**The biggest 'I wish I knew this earlier' when doing ssh workflows on small projects**

Here's probably the most useful thing that I've learnt this year working with computer clusters remotely. If this sounds obvious to you and you're already good at this stuff, feel free to ignore this post!

---

If you're like me and a beginner working on remote HPC clusters with Slurm for computational physics / chemistry, one thing that I naively used to do was:

- write python scripts, input files and logic on my local computer, _using IDEs and tools on my laptop,_ then
- ssh / rsync them over to the remote cluster, then
- run the computation, then
- ssh or rsync the results back.

This gets annoying fast on small projects when you're just starting out, especially when you end up writing a bunch of boilerplate code to sync files back and forth, leading to more and more lines of code. And guess what: I spent too much time on that code! I don't want this to happen, because I generally want to [delete as much code as possible](https://programmingisterrible.com/post/139222674273/write-code-that-is-easy-to-delete-not-easy-to) if the tradeoff between complexity and speed is worth it.

So instead, I realised that I can use [macFUSE with sshfs](https://macfuse.github.io/) to mount the remote directory on my local machine, so that **_I can just treat it like any other directory_**. There are some fiddly bits, like changing the security and permissions settings on an M1 Mac, but I found the tutorial pretty easy to follow.
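For reference, here's a minimal sketch of what the mount looks like. The hostname, username and paths are placeholders, not my actual setup, and it assumes you've already installed macFUSE plus an sshfs build (the Homebrew core `sshfs` formula was removed, so you may need a third-party tap or to build it yourself):

```bash
# Create a local mount point for the remote directory.
mkdir -p ~/mnt/cluster

# Mount the remote project directory over ssh. Options used:
#   volname=cluster       -> label shown in Finder (a macFUSE option)
#   reconnect             -> re-establish the connection if it drops
#   ServerAliveInterval   -> passed through to ssh to keep the session alive
sshfs user@cluster.example.edu:/home/user/projects ~/mnt/cluster \
    -o volname=cluster,reconnect,ServerAliveInterval=15

# Now edit files in ~/mnt/cluster with your local IDE as if they were local;
# sshfs handles the syncing, so there's no rsync boilerplate to maintain.

# Unmount when you're done.
umount ~/mnt/cluster
```

One thing to keep in mind with this approach: every read and write goes over the network, so it shines for editing scripts and input files, less so for shuffling huge output files around.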