In part 1 of our latest Serverless Show, Hillel and Erica Windisch, CTO and Co-Founder of IOpipe, discussed the complexity of cloud provider SLAs and the need for robust architecture.

Hillel raised the announcement that Google had made Python and Go generally available on Cloud Functions. “I want to first ask you about Cloud Run versus Cloud Functions. Obviously, Cloud Functions is your sort of typical, classic, function-as-a-service type offering. Cloud Run is something that’s somewhere in between containers and functions. In your view, is that the future of serverless? Is that a step back, forward, or sideways?”

The Future Is Firecracker Behind Lambda and Fargate

Erica replied, “I think that what Cloud Run is doing is not that different, in a way, from what Firecracker is doing. They’re very different technologies, in a sense; Cloud Run is more the API side. You have Fargate and Lambda on the Amazon side, but the backend of both is now Firecracker. That may not be the case in all regions yet, but at least for some regions, and the future is Firecracker behind Lambda and Fargate. Fargate is strictly a container platform, and Lambda is this container-light code-as-a-service platform, but they’re both using containers in the background. Lambda is now basically just spinning up containers, basically doing Cloud Run in the background where you don’t see it.

Good & Bad Things, Lambda Interface Tightly Controlled

“I think that’s kind of interesting. But then Google basically said, ‘Instead of exposing two interfaces, we’re going to expose one interface,’ and I think there are good things and bad things about this. Having the Lambda interface be very tightly controlled and very strict and, ‘This is how you have to build your application,’ creates a forcing function of constraints that forces you to make sure you build your application a certain way, which is really a good thing, in my opinion. Although the custom runtimes kind of blur that line a little bit. But Cloud Run takes it in another direction, which is, ‘We’re going to expose a single interface to both of these things,’ which is totally possible, and in some ways, kind of enabled by the fact that Amazon did them separately. Now that we realize we can do them together, why don’t we?

“It’s interesting, but it makes me really question what’s the future of Google Cloud Functions. Is it going to become obsolete, or are we going to see both of them long term? Are we going to see Cloud Functions go away? I mean, Google Cloud has not been afraid of removing and deprecating services. Honestly, some of those services have stayed around maybe even longer than they should.”

Wanting Google to Step Up

Erica continued, “One of the main things we ran into was that Google’s hosted queue services and streaming services were just not nearly as good as Kinesis. We’re clearly just moving everything over to Amazon and Lambda, and we’re going to dogfood all of our own things, which I’m so happy that we did, but I also want Google to step up and get these things mature.”

Hillel replied, “But you’re saying that part of the stepping up might be to own the fact that Google Cloud Functions was an interesting stepping stone to Google Cloud Run, but now that should go away, and this should be the paradigm for them going forward.”

Erica replied, “It seems like it’s likely to happen, but who knows? I think it’s too early to really say for sure, but I’m a little hesitant personally about adopting Google Cloud Functions just because I don’t know what the future of that service is.”

You Should Have to Get Your Cloud-Native License

Hillel said, “Yeah. It’s interesting whether they’re confusing the market a little bit too early in terms of what they’re pushing. I wanted to talk about languages in general, and this links up a bit with what you said earlier about custom runtimes. I have a pet peeve, which is that you should have to get your cloud-native license by writing one production application in pure Lambda, API Gateway, S3, and Kinesis, with no Fargate and no custom runtimes. Really embrace the whole cloud with all the constraints and get it going, because you learn a lot about what’s important and what’s not by being forced into that box.

“I’m not always excited about the fact that people can start bringing custom runtimes, and there are some questions that I have about who owns responsibility now for those runtimes. Is that a step backwards where you’ve now taken on the responsibility for patching things? That’s one thing that I’m always concerned about there, and then the other question is, do we really need to support 12 or 15 or 50 different languages? Do we need a COBOL runtime for Lambda? How do you see that?”

Erica replied, “I think it can be useful to have some of those things. Honestly, I don’t think a COBOL one is particularly useful, but that’s not to say that I don’t think running COBOL on Lambda is interesting. I actually think that it is interesting. I think it could be useful, but you can compile those into shared libraries and import those into C++ code, or even create Node add-ons or Python ctypes extensions that import and use that COBOL code. It seems really hard for me to believe that people are going to try and be like, ‘Hey, let’s write some brand-new COBOL code to run on Lambda.’ They’re going to be building and integrating this in some way. So, having some glue code written in another language in order to import those as shared libraries, I mean, seems reasonable. But then again, I’m not a COBOL programmer.
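The shared-library glue pattern Erica describes can be sketched with Python’s ctypes module. This is only an illustration under an assumption: here the system math library (libm) stands in for a hypothetical legacy routine compiled to a shared object, since a real COBOL module would be loaded and declared the same way.

```python
import ctypes
import ctypes.util

# Locate and load a shared library. libm stands in for a hypothetical
# legacy module (e.g. COBOL compiled to a .so); the loading pattern is
# identical for any shared object.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the foreign function's signature so ctypes marshals
# arguments and the return value correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

def call_legacy(x: float) -> float:
    """Thin Python glue around the compiled routine."""
    return libm.sqrt(x)

print(call_legacy(9.0))  # → 3.0
```

The same glue, packaged as a Lambda handler, is what makes “integrate the old code, don’t rewrite it” practical on an off-the-shelf runtime.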

“From the security angle, now you’re introducing another runtime, another language, another compiler, even if it’s a just-in-time compiler, so there are additional things to worry about. I think something particularly interesting might be new languages that might get developed. Let’s say we got to a future where everything is serverless, but only Amazon controls what runtimes we’re allowed to run, or Google defines what runtimes we can run. How does another language ever become popular? You would never have a new server-side language get developed, and the pace of language innovation and development would slow down.”

Democratizing Server-Side Languages

Erica continued, “I think it could definitely slow innovation and adoption of runtimes if things like Lambda didn’t have a bring-your-own-runtime mechanism. I think overall it’s good, but there’s also the other side of it, which is, if you’re building applications, there are definitely reasons why Debian stable and RHEL are popular on the operating system side: they are stable operating systems, you know what you’re getting, and you have a very consistent platform for deploying your applications, because not everybody wants to be running the very latest versions of these things. You want something that you just know is working and stable, and that is also supported and getting security updates and everything else, which is what you’re, in theory, getting with all the Amazon-managed services. If you try and bring your own runtime, well, great. You get the speed of innovation and the bleeding edge, but the security and the updates and everything else are going to be much more DIY.”

Hillel replied, “Sure. I get your point about democratizing server-side languages and how Cloud Run and custom runtimes open that up a little bit so people don’t get stuck in the Amazon kingdom doing what Amazon tells them to, so that’s interesting.”

Read and watch part 3, “Is Serverless Ready for Mission-Critical Apps?”

