Cloud functions
Been looking at various options for function-like services to host API-type stuff & thought I'd share some findings/ramblings.
Focus here is mostly on low usage in the context of free tiers, à la cheapskate. The API is consumer-facing, so second-long cold starts are a no-go.
TL;DR: GCP wins for my specific use case (avoiding cold starts, full-featured execution env). Honourable mentions to Cloudflare Workers and AWS Lambda for some intriguing aspects.
Please do forgive the fairly low-quality stream-of-thought write-up.
Google Cloud
Most familiar with this one, and it has an idle billing tier, so I focused on it more than the rest.
I've been using Python, but will change that since I discovered language choice matters.
Go vs Python, trivial hello-world stuff, gen 2 at the 128 MB sizing. Execution time looks about the same. Memory utilization, on the other hand: 12 MB for Go vs 70 MB for Python. Given that sizing of these functions starts at 128 MB...that 70 MB for a Python hello world is definitely notable.
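For reference, this is roughly the sort of trivial handler being compared - a minimal sketch of a gen 2 HTTP function in Go using the functions-framework-go package (the package and entry-point names here are just illustrative):

```go
// Minimal gen 2 HTTP Cloud Function in Go - the kind of trivial
// hello world behind the memory numbers above.
package helloworld

import (
	"fmt"
	"net/http"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
)

func init() {
	// Register the handler under the entry-point name used at deploy time.
	functions.HTTP("HelloWorld", helloWorld)
}

func helloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello, World!")
}
```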
Then there is the cold start issue. I've had significant issues with this in the past, so the new min-instances mechanism to keep functions warm seems basically compulsory.
Long story short, it costs ~1 USD per month per function to keep one warm, of which the free tier absorbs about 0.9 functions' worth...so basically you get one warm function roughly free and each subsequent one is about a dollar.
Gen 2 also has better concurrency support; however, there is a major gotcha hidden in the documentation:
For preview, this will only be supported by .NET, Java, Node.js, and Go runtimes, for functions with 1 or more vCPUs.
Note specifically the 1 vCPU...not obvious in a LES context, but that's not the bottom end - 0.083 vCPU is...so 1 vCPU is a fairly chunky sizing, which basically means a ~$8 minimum per month. Without concurrency, two simultaneous requests need more than 1 instance...so you're back to cold starts even with 1 min instance kept warm. I suppose you could keep two warm, which given <10 ms execution times means the chances of a cold start are near zero. Even one may be an acceptable risk though. Alternatively, one could keep a function artificially warm via a timer (rough sketch below), but that's quite janky.
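For completeness, the timer hack would just be some small process elsewhere hitting the function's URL on an interval. The URL and the 5-minute interval below are made-up placeholders, and again, this is the janky option rather than a recommendation:

```go
// Janky keep-warm pinger: poll the function's HTTPS endpoint on a timer
// so at least one instance stays resident. URL and interval are placeholders.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const fnURL = "https://REGION-PROJECT.cloudfunctions.net/my-function" // placeholder

	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()

	for range ticker.C {
		resp, err := http.Get(fnURL)
		if err != nil {
			log.Printf("keep-warm ping failed: %v", err)
			continue
		}
		resp.Body.Close() // the response body isn't needed, just the warm-up
		log.Printf("keep-warm ping: %s", resp.Status)
	}
}
```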
It does provide a strong incentive to pack as much stuff as possible into one omnibus function though, which goes against the simple connector/glue-type thinking that functions were originally about.
Digital Ocean
The free allowance tier looks meh, and chatter suggests cold starts are an unsolved problem here. Not seeing anything to suggest this is even worth trying.
Cloudflare
Technically this solution is vastly superior to the rest. No cold starts, easy integration with KV storage, global distribution.
The major gotchas are the incredibly limited execution environment (basically JS), the super-short maximum execution times, and the pricing model. It doesn't scale up gradually like GCP...once you start hitting limits you're on the paid plan ($5/month).
Oracle
Very odd...they'll give you ~6 free VMs and 24 GB of memory forever on an unpaid account. Very generous. Cloud functions? Nope...nada...nothing without a credit card. There is a free allowance...but you can't get it without a card. The free allowance looks like the same 400k GB-s everyone else is offering too. Meh.
Alibaba Cloud
The free allowance looks to cover about 1 function running continuously. But the per-GB-s pricing beyond that is something like 6x GCP's. And I don't see any mention of a minimum instance count, so not sure avoiding cold starts is even possible. Overall doesn't look attractive.
Azure
Very confusing pricing. E.g. for the consumption model it mentions billing by vCPU-s but doesn't list the price. If it's similar to the premium model then it's crazy expensive. Nope...
AWS
Looks cheaper than the rest on a GB-s basis, and billing is per 1 ms rather than 100 ms like the rest. Also requires keeping functions warm, so it ends up in similar territory to GCP...basically 1 free warm function. Unlike GCP there is no idle pricing tier though...so subsequent warm functions end up substantially more expensive if you want to keep them warm all month (~4.5 USD each). For use cases where cold starts are acceptable, AWS looks excellently priced though.
Comments
To be fair, you also won't get free VMs without a credit card, and that's good - fewer abusers.
Kinda...I don't currently have a card loaded (either expired or removed, don't recall)...and I still have access to the VMs. But not the free-tier functions.
I'm gonna go a bit off-topic here.
Cloud functions feel like a scam to hook new devs who want to pay nothing while coding, without ever needing to learn how to run their own VM; then, when they eventually develop big projects, they'll keep using the same platform and get overcharged.
Running your own scalable VM - starting from $5 with any of the big players like Hetzner, DigitalOcean, Vultr or Linode, scaling it up to $50 or even more, and eventually getting a dedicated server - is much, much cheaper in the long term and also more instructive. You learn how to manage your own VM, how to migrate your projects from one machine to another, and much more.
Yes, you won't spend all your time working on your project itself, but if you're in your 20s, the amount of stuff you'll learn that will help over the next 40 years of your career is priceless. And if one of your projects eventually succeeds, you'll be saving a lot compared to cloud functions; if not, you've gained a new skillset that you can use to negotiate higher pay at your job or find a better one.
microLXC has a JSON API for everything; however, I never got around to adding static API keys.
@havoc
I believe I read you mentioning www.fly.io somewhere already, so if hunting for more free resources, you can also check out:
https://www.scaleway.com/en/pricing/?tags=serverless
https://www.netlify.com/pricing/
https://vercel.com/pricing
https://glitch.com/
https://www.koyeb.com/
https://railway.app/
(The last three, I guess, are more PaaS than FaaS, but I'd say the two have quite a lot in common..)
Have you ever considered the monetary value of one's time? If you take that into account, it's not necessarily so obvious that you're 'getting overcharged' anymore..
Also, it all depends on what your potential workload would be: a couple of memory-heavy stateful Java apps? Indeed, perhaps not the best match..
But e.g. a lot of small stateless JAMStack apps? Much more so!
Assuming we're talking about some kind of hypothetical money-maker and not just a hobby thing*: hate to break it to you, but it wouldn't be all roses.
Yes, looking just at the money spent on raw computing resources, you'd definitely be spending less hosting the whole thing yourself.
But, on the other hand - and most importantly - managing the whole thing would be eating up your most precious resource, i.e. time (which you could instead use to develop a new feature, deal with an existing bug, or perhaps learn a new language/framework). Also, among other things, it wouldn't be as highly available (especially in the case of a single dedi/VM; ideally you'd have a lot of redundant servers in many datacenters for that...which wouldn't be as cheap anymore :P), and, especially for a global audience, it would definitely be a lot slower compared to globally distributed cloud functions on the edge à la Cloudflare.
Since you started the post from the perspective of a developer, and given my knowledge of the industry (overall trends in IT, and specific job postings I've browsed through), I personally wouldn't bet on VM/dedi-style sysadmin skills being that important for developers (unless you're planning a career switch to a so-called DevOps position).
Neither now, and even less so in the future, because of the ongoing shift to the cloud and all kinds of managed services (including cloud functions)... So if continuing the dev path, I'd instead focus on leveling up in coding, containers and CI/CD (all of which you'd have a high chance of leveling up in if messing around with cloud functions).
...and also, apart from the time wasted (err, spent) on admin tasks, you'd potentially be overwhelmed with scaling the whole thing.
Since I haven't really scraped and compared a ton of dev jobs to have a broad picture, what I'm about to say is merely anecdata based on the job offers I've browsed through so far, as well as the kind of person our company is on the lookout for: compared to a hypothetical intermediate Python dev who's also an intermediate Linux sysadmin, my personal impression is that a hypothetical expert Python dev with barely any Linux skills would be in a more advantageous position on the job market, with more and better-paid positions to choose from.
So as mentioned above, were I a dev, I'd try sharpening my existing skills instead of trying to become a jack of all trades, master of none - your (local/target job market's) mileage may vary.
*Don't get me wrong, I think that VMs/dedis might indeed be the way to go for hobby sites, i.e. if doing stuff on your own simply for fun, since you wouldn't be losing millions even if your site were down, nor would it have to be blazing fast, lest your customers go to your competitor's site instead, etc. - in fact a few other (non-commercial) forums I frequent are hosted on dedis for this precise reason (they provide the best bang for the buck), but since you mentioned money, things are much more complicated..;)
GCP has a generous free tier for RAM, but their billing is too complicated and the traffic is too expensive. Also, they give twice as much RAM as CPU for free, but from what I can see their default instance type will usually consume about 1.6x more CPU than RAM.
Apart from what @chimichurri already said, Docker is also a thing, and the big three clouds also allow you to upload a container and have X instances of it running, which doesn't make the skills you mentioned all that relevant. However, I agree with you on the pricing aspect of it.
Made some progress since the above post. Specifically, worked out how to deploy Rust code to Cloudflare Workers via WASM. Not a lot of documentation out there on that, so not entirely clear how many of the execution-environment restrictions that lifts vs JS. Will need to try. Interestingly, Rust seems heavier than JS - at least for trivial hello worlds. (In the attached graph, the little spikes on the left are JS, the ones on the right are Rust/WASM.)
Indeed. The pricing structures are pretty blatant drug-dealer-style "first one is free". Truth be told, the focus on cost above is more of an optimization game to me than an actual financial need (I do finance stuff as a day job, so this is just me tinkering).
They're certainly a niche tool, but I don't think that makes them a scam. Plus I think the overall trend towards ever-higher levels of abstraction will continue, along with the big-cloud building-blocks approach. I know...not a particularly popular view on a site full of VM providers.
Definitely get what you're saying about the basics though - learned a lot about that in a home-server environment (Proxmox etc.). There is a certain appeal in keeping it simple. And it likely also works out cost-efficient.
After some reading I found that Cloudflare's effective memory-time is actually much more than 10 ms * 100,000 * 30 days * 128 MB, because the 10 ms is just CPU time, and each of my requests takes more than 1 s of wall time. GCP's "CPU time" is wall time, which means most HTTP requests will come in at over 100 ms.
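To illustrate that difference with the numbers from the post above (100k requests/day over 30 days, 128 MB, ~10 ms CPU vs ~1 s wall time per request - purely a back-of-envelope sketch, not either provider's actual billing formula):

```go
// Back-of-envelope: memory-time accumulated under a CPU-time vs a wall-time
// billing model, using the figures quoted above (illustrative only).
package main

import "fmt"

func main() {
	const (
		requestsPerDay = 100_000
		days           = 30
		memGB          = 0.125 // 128 MB
		cpuSeconds     = 0.010 // ~10 ms CPU time per request
		wallSeconds    = 1.0   // ~1 s wall time per request
	)
	requests := float64(requestsPerDay * days)
	fmt.Printf("billed on CPU time:  %.0f GB-s/month\n", requests*cpuSeconds*memGB)  // 3,750
	fmt.Printf("billed on wall time: %.0f GB-s/month\n", requests*wallSeconds*memGB) // 375,000
}
```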
Thanks for the links. Yeah that was me re fly.
Yes, this actually started with me trying to use the Redis Labs stuff someone posted in the free-stuff thread a day back. Redis Labs doesn't have HTTP API access...so you need some sort of execution env to open a connection to the Redis Labs servers. Hence the rabbit hole as to the best way to do this.
FYI for anyone keeping score - WASM is a dead end for this. While you can compile arbitrary code and jam it into a CF Worker...it doesn't seem to support the necessary network-stack stuff - sockets etc. I.e. Rust can do it, but WASM on Workers only supports a subset of Rust's features.
I agree with everything you said, and agree with the sentiment of this statement too, but it's a bit optimistic to say 40 years. Think back to what the computing landscape looked like 40 years ago. I wouldn't place bets on any of the cloud platforms existing in the state they're in now in 20 years let alone 40.
Also, while I balk at the prices of cloud platforms compared to doing it yourself (especially as I've started my own company), what you're really doing is trading money for convenience. As you progress in a development career, your time becomes more valuable, not less, so if you're working for someone else it's just a simple cost analysis for them: can you provide more benefit to the company by doing more development in the time you'd otherwise have spent configuring a VM, compared to the cost of buying it ready-made?
One last link, unfortunately relatively old (it's from Jan 2021), but perhaps still of some value: https://mikhail.io/serverless/coldstarts/big3/ - this data suggests you should be fine with AWS cold starts, if your pain threshold is 1 s...
The above link suggests that, if trying to avoid cold starts, you should ideally keep the package size as small as possible (so perhaps stick with Go if possible?)
I guess it depends on the use case. I'm specifically interested in (ab)using functions to add more interactive content & logic to static sites. I've tested this with cold starts and it's just not viable. The static page loads essentially instantly...then a second+ passes before the rest paints, which is very noticeable & feels broken. For other cases 1 s is probably nothing.
On the plus side, warm GCP functions fetching redislabs.com key-value data are ridiculously fast - usually <5 ms total execution.
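For anyone curious what that function looks like, here's a minimal sketch in Go, assuming the go-redis client with the Redis Labs host and password supplied via env vars - the package name, entry point, env var names and key are all placeholders rather than the actual setup:

```go
// Sketch: gen 2 HTTP Cloud Function returning a value from a hosted Redis
// instance (e.g. Redis Labs). Connection details come from env vars; the
// key name is hard-coded purely for illustration.
package kvfetch

import (
	"context"
	"fmt"
	"net/http"
	"os"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
	"github.com/go-redis/redis/v8"
)

var rdb *redis.Client

func init() {
	// The client is created once per instance, so a warm instance reuses
	// its connection - part of why warm invocations come back so quickly.
	rdb = redis.NewClient(&redis.Options{
		Addr:     os.Getenv("REDIS_ADDR"), // e.g. "redis-12345.<region>.cloud.redislabs.com:12345"
		Password: os.Getenv("REDIS_PASSWORD"),
	})
	functions.HTTP("GetValue", getValue)
}

func getValue(w http.ResponseWriter, r *http.Request) {
	val, err := rdb.Get(context.Background(), "greeting").Result()
	if err != nil {
		http.Error(w, fmt.Sprintf("redis error: %v", err), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, val)
}
```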