Modern computers have processors with many cores, and spreading load across those cores is the key to maximising performance. Writing multi-threaded code used to be hard, until Go came along and spoiled the fun by making it so easy.
In this article, we will explore what WaitGroups are, the problem they solve, and how to use them in your own Go programs.
In our above code, the background task doesn't get a chance to write “do something” before the program ends, at which point all goroutines are terminated.
To solve this, we could add a sleep at the bottom of our `main` function (with `time.Sleep`), but that's a fragile solution, because we don't know how long our `doSomething` function might need to run.
As you can see, we’ve instantiated a new `sync.WaitGroup` and then called its `.Add(1)` method before launching our goroutine.
We’ve updated the function to take a pointer to our existing `sync.WaitGroup` and then called the `.Done()` method once we have finished our task.
Finally, we call `waitgroup.Wait()` to block the execution of our `main()` function until the goroutines in the WaitGroup have completed.
When we run this program, we should now see “do something” printed before the program exits.
In one of my production applications, I was tasked with creating an API that interfaced with a tonne of other APIs and aggregated their results into a single response.
Each of these API calls took roughly 2-3 seconds to return a response, and due to the sheer number of calls I had to make, doing this synchronously was out of the question.
To make this endpoint usable, I had to employ goroutines and perform these requests concurrently.
In this tutorial, we learned the basics of WaitGroups, including what they are and how we can use them within our own highly performant applications in Go.
If you enjoyed this tutorial or have any comments/suggestions, then please feel free to let me know in the comments section below, or in the suggestions section at the side!