In virtualization theory we don't recommend swap files (virtual memory) but instead suggest that the underlying hypervisor be allowed to do any swapping of RAM to disk as needed. This is sound practice when the systems administrator has sufficient control over the hypervisor (such as when using VMware ESXi) to tune the hypervisor's performance. However, when using public cloud platforms (e.g. Rackspace, Amazon AWS, Google Compute Engine or Digital Ocean), the systems administrator gives up this ability. Additionally, by default most public cloud providers do not configure swap within their native virtual machine images. Instead, their cloud instances have a limited amount of RAM available, and when that memory is exhausted, out-of-memory errors occur, often leading to application crashes in production environments.
When dealing with the Linux operating system, the kernel's out-of-memory (OOM) killer will terminate a process when memory is exhausted, and without proper monitoring this can be very embarrassing. This is where I personally apply my belief that inefficiency is better than inoperability. Otherwise said: degrade but don't break.
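If you suspect the OOM killer has already struck, the kernel log records each kill. A minimal check (the exact log text varies between kernel versions, and reading dmesg may require root on hardened systems):

```shell
# Search the kernel ring buffer for OOM-killer activity;
# prints a fallback message if nothing is found.
dmesg | grep -i "killed process" || echo "no OOM kills logged"
```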
There are two choices when confronted with out-of-memory situations in public cloud environments: first, increase the size of the cloud instance (and pay more money), or second, create a swap file for the edge cases where virtual memory is needed. The first option is highly discouraged for situations where page swapping is very rare. For instance, if you swap only during nightly maintenance operations, why pay more money? Additionally, the first option only works so long as you size the server instance large enough that no possible scenario will ever exhaust the available RAM. Sure, sometimes it's better to size-up the servers. But in most cases, the second option gives more power to the systems administration team.
The second option allows an inefficiency (degraded performance) in ANY case where memory is exhausted, rather than a hard crash. It also allows the network monitoring system (e.g. Stackdriver, Nagios, Zenoss or Zabbix) to record instances where memory swapping does occur, so that systems engineers can determine whether a cloud instance resize is needed or the swapping was a statistical anomaly. The end result is a lower cost platform with greater stability, especially when under attack.
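Most monitoring agents read these numbers for you, but they can be spot-checked from any shell. A quick sketch, assuming a Linux instance with the standard procps tools installed:

```shell
# Snapshot of current memory and swap usage, in MiB
free -m
# Swap totals straight from the kernel (values reported in kB);
# SwapTotal is 0 when no swap is configured.
grep -E '^Swap(Total|Free):' /proc/meminfo
```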
Implementing the solution is a simple operation:
sudo su -
#create a swapfile (this is a 1GB swap)
dd if=/dev/zero of=/swap bs=1024 count=1048576
#restrict the swap file to root (mkswap warns about insecure permissions otherwise)
chmod 600 /swap
#format the swap file
mkswap /swap
#turn on the swap file
swapon /swap
#add the swapfile to the /etc/fstab so it mounts after reboot.
echo "/swap swap swap defaults 0 0" >> /etc/fstab
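Once the script has run, it is worth confirming that the kernel actually picked up the new swap area. A quick verification sketch (read-only, so it is safe to run any time):

```shell
# The kernel lists every active swap area here; the new
# /swap file should appear as a row of type "file".
cat /proc/swaps
# Overall memory picture, including the Swap row
free -m
# How eagerly the kernel swaps (0-100; lower values prefer RAM)
sysctl vm.swappiness
```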