
- #DOCKER FOR MAC UPGRADE FOR MAC#
- #DOCKER FOR MAC UPGRADE CODE#
If you’re spinning up many Docker instances locally, you’re either going to punish your workstation’s fans (think of the heat from all that now-loaded RAM) or put extra strain on your laptop battery, and wear out your SSD if swap is enabled. The general synopsis: you shouldn’t be running more than a few Docker instances locally for testing, and they should be small. If you’re a dev, stop putting so many dependencies in your YAML. For example, if I run Debian, I can specify a CentOS user space in my Docker container, and that is what will be present in the container, running on top of my own system’s native kernel.
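As a minimal sketch of that idea (the image tag and commands here are illustrative, not from the original post), a Dockerfile can pull in a CentOS user space while the kernel inside the container remains whatever the host (or the Docker Desktop VM) is running:

```dockerfile
# Illustrative only: a CentOS user space layered on the host's kernel.
# Containers don't ship a kernel of their own.
FROM centos:7

# `uname -r` reports the host/VM kernel version, while /etc/os-release
# shows the CentOS user space sitting on top of it.
CMD ["sh", "-c", "uname -r && cat /etc/os-release"]
```

Building and running this on a Debian host prints the Debian (or VM) kernel version followed by CentOS release information.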
With all of that going on, web applications with hundreds of files reload faster than I can move my mouse to click refresh in a browser. Even Rails applications process changes and serve requests in 100ms or less in development. The only exception is when dealing with assets. With webpack I load in 1.4MB of JavaScript through Babel, and without any caching it takes around 500ms for it to compile a change. Likewise, 250KB of SCSS going through a lot of loaders takes 1.8 seconds without caching or optimizing anything. If anyone is curious, I wrote exact steps on how I have everything set up here:

Docker for Linux uses the native host’s kernel and swaps out the user-space-level OS. Docker for Mac is a bit older, but has the same problems as above. Docker for Windows = no local Linux kernel. I know that Docker for Windows is a new product and is still in its infancy.
I've done this before using unison, and you can achieve near-native performance that way on macOS. Since I did this, I believe a tool has been released to handle it automatically for you, but I can't remember what it's called.

I've been using Linux now for about a year and have been loving native Docker without messing around.

While still slower than running natively on Linux, there are some mitigations for the performance problems with shared folders on MacOS:

- If the app in your container creates cache files (like compiled code), define that cache directory as a separate volume and don't export it to your host. This will speed up the app's access to the cache folder.
- Try different volume caching strategies (the default is to always synchronize, which is slow; you might not need such strict guarantees).
- Sync code between host and container instead of using shared volumes.
- Speed up file sharing by using NFS instead of the native MacOS file sharing.

The first two were enough for the MacOS user on my dev team to make the experience less painful. Never used the latter two myself, although I heard they work well enough for a dev environment. Using Linux is definitely the better option for Docker.

Not all setups with Docker for Windows are very slow. For example, I moved from running Docker on Linux to Docker for Windows on my dev box, and while I did notice a slowdown in volume mount performance, it didn't get unusably slow, though there's room for improvement. I am using an i5 3.2GHz with an SSD (about 3 years old). The amount of indirection going on with my Windows box is high, yet I'm happy with the performance. Docker for Windows runs the Docker daemon. Windows Subsystem for Linux routes its own Docker client to that Docker for Windows daemon. All of my source code is mounted from an external HD (not SSD) into WSL. That WSL-mounted source code is also mounted back into Docker for Windows.
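Two of the mitigations mentioned above (a relaxed caching strategy and a separate cache volume) can be sketched in a Compose file. This is illustrative only, with hypothetical service and path names: `cached` is Docker for Mac's bind-mount consistency flag that relaxes host/container synchronization guarantees, and a named volume shadowing the cache directory keeps that hot path on the VM's native filesystem instead of going through osxfs:

```yaml
# docker-compose.yml (illustrative; names are hypothetical)
version: "3.7"
services:
  app:
    image: ruby:2.6
    volumes:
      # `cached` relaxes consistency guarantees on Docker for Mac,
      # speeding up reads through the osxfs share.
      - .:/app:cached
      # The named volume masks the compiled-code cache directory so it
      # is never exported back to the host.
      - app_cache:/app/tmp/cache
volumes:
  app_cache: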
The problem is not even usually CPU, memory, or network - it is almost always entirely down to disk performance. Even then, it's not the disks of the VMs themselves, it's mounting folders from the host onto a VM - because it's not a native filesystem, solutions like NFS exist, and different hypervisors have their own filesystems. Docker for Mac uses a custom one called osxfs to share the host filesystem with the VM it runs. In other words, if you mount anything into your containers from your host machine, you are very likely to see worse performance. If you don't mount anything from the host machine, you are likely to see near-native performance, as long as you haven't run out of other resources (CPU, memory, network).

Edit: To add to this, you may have realised from what I've just been saying - if you didn't use any host-mounted folders ever, you could actually sync the code into your VM.
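The sync-into-the-VM approach can be sketched with plain Docker commands (volume and path names here are hypothetical, and a tool like unison would automate the re-sync step): copy the working tree into a named volume that lives inside the VM, then run the app against that volume with no persistent host bind mount.

```shell
# One-off copy of the current directory into a named volume that lives
# inside the Docker VM's filesystem.
docker volume create codevol

# Stream the working tree into the volume via a throwaway container;
# only this copy step touches the host filesystem.
tar -c . | docker run --rm -i -v codevol:/code alpine tar -x -C /code

# Run against the VM-local volume at near-native disk speed.
docker run --rm -v codevol:/code -w /code alpine ls
```

The trade-off is that edits on the host are no longer visible in the container until you re-run the copy, which is exactly the gap a file-sync tool fills.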