Depends on your budget, of course. For most hobbyists, an RTX3080 or RTX4080 is out of budget. Nevertheless, your RTX3080 will still outperform an RTX4060. I would call it a "high-end GPU" (which, again, is subjective).
Providers may consider doing the calculations for their end users. But with >1000 users, they face additional challenges beyond the GPU itself. E.g. you can't just put 8 RTX3080/RTX4080 GPUs in a single computer: you need enough PCIe lanes and an absurdly large PSU (which may cost as much as a GPU).
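To put rough numbers on that, here is a back-of-the-envelope sketch. The TDP is an approximate spec-sheet value; the rest-of-system draw and headroom factor are assumptions, not measurements:

```python
# Rough sizing for a hypothetical 8-GPU box. All numbers are
# approximations/assumptions; treat this as a sketch, not a build guide.
NUM_GPUS = 8
GPU_TDP_W = 320         # RTX3080 board power, approx. spec-sheet value
GPU_PCIE_LANES = 16     # lanes per card at full bandwidth
REST_OF_SYSTEM_W = 300  # CPU, RAM, disks, fans (assumption)
PSU_HEADROOM = 1.25     # 25% headroom for load spikes (common rule of thumb)

total_w = NUM_GPUS * GPU_TDP_W + REST_OF_SYSTEM_W
print(f"PSU needed: ~{total_w * PSU_HEADROOM:.0f} W")     # ~3575 W
print(f"PCIe lanes wanted: {NUM_GPUS * GPU_PCIE_LANES}")  # 128
# A consumer CPU exposes roughly 20-24 PCIe lanes, so you're pushed onto
# a workstation/server platform before you even shop for that PSU.
```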
Anyway, according to Nvidia's EULA, cloud providers are not even allowed to deploy RTX3080 GPUs. What you can get, though, are K40s (or K80s, which are essentially twin K40s). To give you an idea, look up the prices that AWS charges for those instances.
I would call those absurd prices, given that these GPUs don't even outperform a GTX1080Ti. It shows that even the cheapest cloud GPU servers are expensive.
(Un)fortunately, there are auction websites like vast.ai that let you rent consumer-grade GPU servers at lower prices. Having used that service for about 2 years, I can tell you it's unreliable and not worth the pain: just too many variables.
When I needed GPU servers for my company, I ended up self-hosting them, which is also quite expensive (think failovers, multiple internet providers, smart routers, security, high energy costs). But it's more reliable than anything else I tried.
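Just to illustrate the energy part, a rough sketch. The average draw and the kWh price are hypothetical; plug in your own rates:

```python
# Rough yearly electricity bill for a self-hosted GPU server.
# Both numbers below are placeholder assumptions, not my real figures.
NUM_GPUS = 8
AVG_DRAW_W = 250    # average per-GPU draw under mixed load (assumption)
PRICE_PER_KWH = 0.30  # hypothetical local rate

kwh_per_year = NUM_GPUS * AVG_DRAW_W * 24 * 365 / 1000
print(f"{kwh_per_year:.0f} kWh/year -> ~{kwh_per_year * PRICE_PER_KWH:.0f}/year")
# 17520 kWh/year -> ~5256/year, before cooling, failover hardware, etc.
```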
PS: Having said that, if you do the calculations client-side (on customer premises), you are faced with 50 different types of consumer GPUs, and you are going to disappoint some customers. E.g. CUDA won't work on an AMD GPU. GPU driver versions start to matter, and drivers are something you can't ship with your software. Bigger networks perform better but won't fit in the memory of smaller GPUs. You won't be able to run it on a tablet, and laptops will suffer from cooling issues. All of which has happened before, with other AI software.
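To make that fragmentation concrete, here is a minimal sketch of the kind of capability probing you end up shipping client-side. It uses PyTorch; the model names and VRAM thresholds are made up for illustration:

```python
# Minimal device-capability probe: pick a model that fits the customer's
# hardware. Model names and thresholds below are hypothetical.
import torch

def pick_model() -> str:
    if not torch.cuda.is_available():  # no GPU, or no compatible driver/build
        return "tiny-cpu-model"
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    if vram_gb >= 16:
        return "large-model"   # only fits on high-VRAM cards
    if vram_gb >= 8:
        return "medium-model"
    return "small-model"       # low-end laptops, older GPUs

print(pick_model())
```

Even this only covers memory; in practice you also end up branching on driver version, compute capability, and thermals.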