TensorDock: Affordable, Easy, Hourly Cloud GPUs From $0.32/hour | Free $15 Credit!
Looking for an alternative to big, expensive cloud providers who are fleecing you of money when it comes to cloud GPUs? Meet TensorDock.
TensorDock
We're a small, close-knit startup based in Connecticut that sells virtual machines with dedicated GPUs attached. Our primary goal isn't to make money; it's to democratize large-scale high-performance computing (HPC) and make it accessible to everyday developers.
Why TensorDock?
1. Ridiculously Easy
Your time is money, so we've tried to make your life as easy as possible. We built our own panel, designed for the GPU use case. No WHMCS here. We did things our way. We have an API too.
When you deploy a Linux server, the NVIDIA drivers, Docker, NVIDIA-Docker2, the CUDA toolkit, and other basic software packages come preinstalled. For Windows, we include Parsec.
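Once a Linux VM is up, a quick sanity check on the preinstalled stack is to query `nvidia-smi`. A minimal sketch (the sample output string below is made up for illustration; your GPU name and memory will differ):

```python
import subprocess

def query_gpus():
    """Ask the preinstalled NVIDIA driver for each GPU's name and total memory."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

def parse_gpu_csv(csv_text):
    """Parse one 'name, memory' CSV line per GPU into (name, memory) tuples."""
    return [tuple(field.strip() for field in line.split(", "))
            for line in csv_text.strip().splitlines()]

# Example of the CSV format nvidia-smi emits (fabricated sample):
sample = "Quadro RTX 4000, 8192 MiB\nQuadro RTX 4000, 8192 MiB"
gpus = parse_gpu_csv(sample)  # [('Quadro RTX 4000', '8192 MiB'), ...]
```

If `nvidia-smi` runs and lists your GPU, the driver side of the stack is working; `docker run --gpus all` is the equivalent check for the container side.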
2. Ridiculously Cheap
The cheapest VM you can launch is $0.32/hour: a Quadro RTX 4000 + 2 vCPUs + 4 GB of RAM + 100 GB of NVMe storage. If you're running an hourly GPU instance at another provider, check our pricing; you'll save by switching to us. If you can commit long term, we can give discounts of up to 40%, sometimes 60% or higher.
Our pricing is unique. During our experimentation phase, we purchased a ton of different servers and ended up with a heterogeneous fleet, so we decided to charge per resource. Customers are rewarded for choosing the smallest amount of CPU/RAM, and they'll be placed on the smallest host node available. Select your preferred GPU and other configuration, and you'll only be billed for what you're allocated. It's that simple.
If you're training an ML model for 5 hours on a 4x NVIDIA A5000, it'll cost you less than $20.
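To illustrate how per-resource billing adds up, here's a sketch of the math. The rates below are made-up placeholders, not TensorDock's actual price list; the real numbers are on the pricing page:

```python
# Per-resource hourly billing sketch. All rates are illustrative
# placeholders -- check https://tensordock.com/pricing for real prices.
RATES = {
    "gpu_a5000": 0.90,    # $/hour per GPU (assumed)
    "vcpu":      0.01,    # $/hour per vCPU (assumed)
    "ram_gb":    0.005,   # $/hour per GB of RAM (assumed)
    "nvme_gb":   0.0001,  # $/hour per GB of NVMe (assumed)
}

def hourly_cost(gpus=0, vcpus=0, ram_gb=0, nvme_gb=0, gpu_key="gpu_a5000"):
    """Return the hourly cost of a VM billed per allocated resource."""
    return (gpus * RATES[gpu_key]
            + vcpus * RATES["vcpu"]
            + ram_gb * RATES["ram_gb"]
            + nvme_gb * RATES["nvme_gb"])

# A 5-hour training run on a 4x A5000 box with 8 vCPUs / 32 GB RAM / 100 GB NVMe:
total = 5 * hourly_cost(gpus=4, vcpus=8, ram_gb=32, nvme_gb=100)  # ~$19.25
```

The point of the per-resource model is that trimming any line item (fewer vCPUs, less RAM) directly lowers the bill, rather than forcing you into a fixed bundle.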
3. Live GPU Stock
As of this very moment, we have over 1,000 GPUs in stock, with another 5,000 available through reservation: you contact us, and we have our partner cloud providers install our host node software stack on their idle GPUs. We can handle your computing needs, no matter how large.
The details
Because we charge per-resource, just check out our pricing:
https://tensordock.com/pricing
You can register here:
https://console.tensordock.com/register
And then deploy a server here:
https://console.tensordock.com/deploy
It's that simple.
The LES Offer
Not everyone needs GPUs, especially on a server forum like LES, so this is more of a soft launch for us before we go on to other ML-related forums at the start of next year.
This is only for LES users with at least 5 thanks and 5 posts/comments. If you already claimed the signup bonus on LET, unfortunately you can't claim a second one.
$5 in account credit for registering and posting your user ID
Register: https://console.tensordock.com/register
User ID: https://console.tensordock.com/home (find it under the "Your Profile" box)
Then, post:
Cloud GPUs at https://tensordock.com/, ID [Your User ID]
E.g. if your user ID was recbob0gcd, you'd post:
Cloud GPUs at https://tensordock.com/, ID recbob0gcd
Additional $10 in account credit for creating a server & giving feedback
Once we've given you $5 in account credit, go create a GPU server and give us some feedback on the experience. Two sentences, please! Again, post your user ID with this comment, and we'll give you an additional $10 in account credit. Bonus if you try using our API.
The goal is to get some feedback to improve the product before we go bigger.
~ Mark & Richard
Website: https://tensordock.com/
Contact: https://tensordock.com/contact
Questions? Feel free to ask within this thread.
Comments
Looks awesome! Good luck, mate
Ympker's VPN LTD Comparison, Uptime.is, Ympker's GitHub.
An exotic LES offering! I like it on that basis alone!
For something named after Google's TensorFlow, I do think you'll need to add GCP pricing to the industry comparison tab though...
If you've got 1,000 GPUs in stock, you could consider soaking up some of that capacity via spot pricing? Your dedicated pricing seems vaguely competitive with Google's spot pricing, so presumably going spot could leapfrog GCP. idk... just speculating here
Hey @lentro! Way cool! Congrats! Best of luck!
MetalVPS
Finally someone utilizes GPUs for something better than buttcoin mining.
What do you think people will spend their free credits on?
Head Janitor @ LES • About • Rules • Support • Donate
@lentro - looks like an awesome project! Best of luck with everything and the console/dashboard looks super nice!
Haha, thanks for the support everyone! Happy to give starting credits to anyone
Haha, it's actually named after the Tensor core in general. Tensor cores are the kind of core ML workloads use; they're much faster than NVIDIA GPUs' regular CUDA cores. So our idea was that this is a "dock" where people can load up on tensor computing power for their ML needs
https://www.nvidia.com/en-us/data-center/tensor-cores/
Actually, we don't own all the GPUs. Last month, we got six figures in money to play with, which is a few hundred GPUs. The rest are with other companies we're partnering with; essentially, we can share capacity to handle surges together. Right now, around half of our own GPUs are being used, with the rest idle mining (which by itself is enough to pay back the initial investment cost). My goal is to get this up to 100%, of course, but for now utilization is OK. We will definitely consider adding spot instances, but probably in at least half a year's time, given demand is good enough right now
LMAO!!!
Actually, earlier last year I watched gpu.land very closely: https://www.producthunt.com/posts/gpu-land
From what I could tell, their maker, "Ilja Moi," isn't actually real but got $50k in AWS credits or something, and he was reselling them to ML developers, turning $50k in AWS credits into something like $20k in real cash. Nowadays AWS blocks crypto miners (an insider told me they inspect packets; e.g. a constant stream of small hashes every second to a mining pool IP on port 3333 will probably get you terminated), so I think this was a creative way of stealing money from AWS. No idea though
But don't get me wrong, we idle mine (Eth on GPUs; Filecoin/Chia/Storj on HDDs; etc.), so with 24x7 utilization of all resources we are operationally profitable even with 0 customers (whether we make enough to get a return on our investment is another question). So @Janevski, maybe you should still hate me
Thanks! Really glad to see all the support from LES!
I'm feeling like in a week or two the project will be ready to be posted on Google so we can start advertising to real ML users! Right now over a dozen VMs from LES/LET users, and we're getting a lot of good feedback!
Cloud GPUs at https://tensordock.com/, ID recqcevltm
Too bad it's only available in US locations.
OnePoundEmail (aff link)
Congratulations on being the first, check your account
Surprised that people on LES like to give feedback, while LET has many more people who just want the credits
US is the cheapest for power and bandwidth at the moment. We're thinking about maybe Europe and Asia, but those are long term. Where would you like us to be?
Somewhere in Asia, maybe Singapore or Japan. I use a GPU server every 2-3 months, 12 hours a day for 5 days (to play browser games, so an RTX 4000 is overkill, but eh). Will try the server probably today or tomorrow.
Interesting, probably not Asia for a while due to the higher costs there... But in any case, let me know how the server goes!
Only Ubuntu 18.04, 20.04, and Windows 10 (BYOL) in the default OS selection; you can probably ask support to install your own OS preference.
Server Management "Panel" (screenshots): Overview, Networking, Billing, Actions, YABS.
I like their low-balance notification alert, but it can't be customized (see the Alert Schedule screenshot).
You can also withdraw your balance to your original payment method, minus a fee (Withdrawal screenshot).
The server works just fine (installing drivers, etc.). I didn't see any resource (CPU/disk/RAM) upgrade option in the panel; that might be useful for some people.
Only Stripe for deposits, no PayPal.
Can't change any profile details (email, password, etc.).
48-hour standard email support, or a 15-minute video call with a representative (which is nice). See the Available Support screenshot.
sorry for the plain "feedback" @lentro, I'm not used to this kind of thing
Now we know where to find someone when we feel lonely.
Webhosting24 aff best VPS; ServerFactory aff best VDS; Cloudie best ASN; Huel aff best brotein.
Do you allow mining?
From OGF
LOL yes. I didn't want a phone number, but I also know how shady a company with only an email address might seem, so I set up a Calendly where you can schedule a call with us at least a day in advance, so we aren't surprised when someone wants to chat
Sure, any OS that supports cloud-init is fair game. Tbh, I haven't seen anyone run machine learning on CentOS, so I didn't really see the need.
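For reference, "supports cloud-init" means the image can consume a standard cloud-config user-data file on first boot. A minimal example (the username, key, and packages here are just illustrative):

```yaml
#cloud-config
# Minimal cloud-init user-data: create a user and install a couple of
# packages on first boot. Any cloud-init-capable OS image can consume this.
users:
  - name: mluser
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...  # your public key here
package_update: true
packages:
  - git
  - python3-pip
```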
Agreed, we'll look into adding support for this! Probably in the next few months or so; the database would need a bit of reworking to support it (we'd need multiple transactions for a single server, one for each hardware configuration the server has been provisioned with).
Thanks for the feedback! Check your account for feedback credits
Fascinating. Never thought I'd find this on LES....
Signed up; small issue, but the first thing I see is this:
Where'd you pull the performance metrics from? Did you run them yourself? For the A/V100s, it looks like you have FP16 numbers, not TF32 (see: https://lambdalabs.com/blog/nvidia-a100-gpu-deep-learning-benchmarks-and-architectural-overview/)
Might be worth looking into standardizing metrics for all GPUs. If memory serves me correctly, the AX000 line adds tensor cores for FP16, so it should be at least nominally higher than what you have now.
Second - email went to spam. Might want to look at your DKIM setup?
Some OS templates might be a good idea too. Don't make me install Jupyter and TF/PyTorch and wrangle with CUDA myself
Spun up a server, and I swear, I only clicked the button once, but two servers were provisioned:
Maybe disable the button after it has been clicked.
Otherwise, it's very nice. Ran as expected, of course. Personally, I stick with TPUs when possible because they're generally still faster and give more flexibility with VRAM, but you have a very competitive platform.
A way to resize instance / add compute + GPUs would be great.
Good luck!
Haha thanks for checking it out! As you can see, lots of improvements we can make before we launch on ML forums.
I added some additional credits to your account as a thanks for your feedback!
For the GPU metrics, we pulled the numbers from NVIDIA's data sheets (with sparsity):
https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf
The numbers we got are the ones next to "Tensor Float 32" in the right-hand table. We'll probably put together our own real-world benchmarks like Lambda's, since it's hard to compare the different GPUs otherwise. The A5000, for example, is, I believe, our best deal for price-to-performance, even for machine learning, but the current deploy page doesn't really communicate that.
Will look into this, thanks! Gmail seems to go to inbox, and Microsoft to spam, so probably some misconfiguration, good catch!
Agreed, will do ahead of the actual "full" launch
It takes 2 seconds for the API to respond confirming the deployment request, so disabling the button after it's clicked is definitely a great suggestion; that way it never gets triggered twice.
Overall, thanks so much for your feedback! I looked into TPUs, but the only one I could find (https://iot.asus.com/products/AI-accelerator/AI-Accelerator-PCIe-Card/) was just too weak, and TPUs can't idle mine, so they're not feasible for a startup that needs to maximize revenue. A year ago, I chatted with a Chinese tech giant that now makes AI chips, but given the government sanctions and an inability to market them, I gave up pursuing that. For now, the big clouds have an oligopoly on specialized hardware. In any case, I hope our prices are low enough that the price-to-performance ratio is comparable to TPUs, and you can use us for rendering if you ever need that in the future
I thought so - sparsity performance is probably not the best measure to use; it only applies to certain models/setups. FP16 would probably be a better choice.
Ya, TPUs are the AI equivalent of ASICs for miners. They do one specialized task but do it well. You won't be able to idle mine with them, and you're pretty much limited to ML/AI/tensor-based ops. No rendering, gaming, etc.
Very cool! Good luck!
Cloud GPUs at https://tensordock.com/, ID recdvuwbom
Whoops, sorry for the late reply! Check your account now
Cloud GPUs at https://tensordock.com/, ID recqatgsdc
I’m a simple man I see gifs, I press thanks
Done, check your account!
Reposting from over yonder, in case somebody else finds it useful
Also, our official API can be found here:
https://documenter.getpostman.com/view/10732984/UVC3j7Kz
If anyone wants to do something cool with it
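For anyone who wants a starting point, here's a rough sketch of how you might call an HTTP API like this from Python. The endpoint path, field names, and auth scheme below are hypothetical placeholders, not the real schema; check the Postman docs linked above for the actual parameters.

```python
import json
import urllib.request

API_BASE = "https://console.tensordock.com/api"  # assumed base URL

def build_deploy_request(api_key, gpu_model, gpu_count, vcpus, ram_gb):
    """Assemble a deployment payload. Every field name here is a
    hypothetical placeholder -- consult the Postman documentation
    for the real schema."""
    return {
        "api_key": api_key,      # placeholder auth field
        "gpu_model": gpu_model,
        "gpu_count": gpu_count,
        "vcpus": vcpus,
        "ram": ram_gb,
    }

def deploy(payload, endpoint=API_BASE + "/deploy"):
    """POST the payload as JSON and return the decoded response."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_deploy_request("MY_KEY", "A5000", 4, vcpus=8, ram_gb=32)
# deploy(payload)  # uncomment once you've filled in the real schema
```

Keeping the payload construction separate from the HTTP call makes it easy to swap in the real field names from the Postman docs without touching the transport code.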
Missing command: