Infrastructure Deployment Automation with Red Hat Ansible & Azure Bicep
Linux and Open Source
The latest updates to configure and manage your Linux workloads in Azure. We recently introduced a new, simpler open source declarative language called Bicep that helps you easily deploy Azure resources. For Red Hat users, we’ll show you the new managed app version of the Red Hat Ansible Automation Platform on Azure, which sets up a maintenance-free Ansible environment in Azure. Lachie Evenson, Linux and Azure expert, joins Jeremy Chapman to share how we have focused on removing the learning curve to make it easier to configure and manage your workloads in Azure.
00:43 — Linux in Azure
02:08 — Azure Bicep
04:20 — Break down big files using modules
06:52 — Red Hat Ansible
08:52 — Deploy new apps or workloads with Ansible
11:25 — Wrap up
Bicep is available now, learn more at https://aka.ms/bicep
Connect with the community at https://github.com/azure/bicep
Sign up for Red Hat Ansible Automation Platform at https://aka.ms/AnsibleManagedApp
Unfamiliar with Microsoft Mechanics?
We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
- Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries?sub_confirmation=1
- Join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
- Watch or listen via podcast here: https://microsoftmechanics.libsyn.com/website
Keep getting this insider knowledge, join us on social:
- Follow us on Twitter: https://twitter.com/MSFTMechanics
- Follow us on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
- Up next, we take a look at the latest updates for configuring and managing your Linux workloads in Azure starting with a new open source Bicep language that helps you more easily deploy Azure resources. And for Red Hat users, we’ll show you the new managed app version of the Red Hat Ansible Automation Platform to set up a maintenance-free Ansible environment in Azure. So today, I’m joined by Linux and Azure expert Lachie Evenson from the engineering team in Azure, joining us all the way from his home today in San Francisco. Welcome.
- Thanks for having me. It’s great to be here.
- It’s really great to have you on. Now today marks Azure Open Source Day. So it’s a really good opportunity to look more closely at Azure support for Linux, especially with the latest updates. In fact, a lot of people might be surprised to learn that around 60% of compute cores in Azure run Linux workloads.
- That’s right. Linux is really popular on Azure. In fact, we support practically every Linux distribution and have done a lot to partner with the open source community to make sure that Azure is the best place to run your Linux workloads. We help you bring your existing workloads or build new ones from scratch and take advantage of managed Azure services, from our databases to Azure Kubernetes Service, Azure Machine Learning, and much more.
- And I know with Red Hat in particular, we make it extremely easy to even bring your existing Linux subscriptions without having to pay again in Azure. So, there’s really a long history then between Azure and Linux, that really spans almost a decade and it’s only getting better, right?
- It absolutely is. We’ve really focused on removing the learning curve when coming to Azure and also making it easier to manage and configure your workloads.
- And as you mentioned with the learning curve, one of the things a lot of people coming into Azure have to grapple with is Azure Resource Manager templates, which are authored in JSON and are foundational to Azure. They help describe your infrastructure requirements as code, whether that’s things like networking, compute, data or other services for automation. Does that get any easier?
- It does. We now have a new option that helps you get proficient on Azure faster. We recently introduced a newer, simpler open source declarative language called Bicep. And one of the great things it brings is an easier authoring experience, which actually leverages the Azure Resource Manager API under the covers.
- Right, and as a huge deployment fan, I’d love to see this in action.
- Sure, I’ll show you how this works on a complex app deployment. Here is a complex three-tier application, all with standard VMs. There is a web tier, an app tier and a database tier. You’ll also see a jump box and networking configs. I’ll start by showing you what this looks like as an ARM template. Then I’ll show you what it looks like when we convert it to Bicep in Visual Studio Code. Here you’re seeing the JSON-based ARM template. Now this is obviously a pretty overwhelming set of code to look at. The entire deployment is contained within a single file and it’s difficult to read. Now let’s compare this to the same thing using Bicep. So now you’re seeing the ARM template on the left and Bicep on the right. Right away, you’ll notice a few things. First, the syntax of Bicep has a lot less noise. With ARM templates, we embed the ARM template language inside of a JSON file, and that’s why there are so many double quotes, commas and square brackets. Bicep, on the other hand, feels a lot more like a traditional programming language. You’ll notice in ARM, we had these functions to access parameters and variables, or construct resource IDs. But with Bicep, you can naturally reference these things by name, cutting out a lot of unnecessary syntax. The proof is in the character count. The ARM template has over 36,000 characters, while the Bicep equivalent comes in at around 20,000 characters, roughly 40% less code in my case. That translates to code that is easier to read, write and maintain. Bicep is also a lot smarter than ARM templates about things like determining dependencies. Throughout the ARM template, you’ll see the ‘dependsOn’ property frequently used. In Bicep, we don’t need it. If we look at the VNet resource here, there’s a dependency on a set of network security groups. Bicep is able to determine the dependency through these references, which again results in less code and fewer mistakes.
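To make the implicit-dependency point concrete, here’s a minimal sketch of what that pattern looks like in Bicep. The resource names, address ranges, and API versions are illustrative, not taken from the demo:

```bicep
// Hypothetical NSG and VNet. No 'dependsOn' is written anywhere:
// Bicep infers that the NSG must deploy first from the webNsg.id
// reference inside the VNet's subnet definition.
resource webNsg 'Microsoft.Network/networkSecurityGroups@2021-05-01' = {
  name: 'web-nsg'
  location: resourceGroup().location
}

resource vnet 'Microsoft.Network/virtualNetworks@2021-05-01' = {
  name: 'app-vnet'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [ '10.0.0.0/16' ]
    }
    subnets: [
      {
        name: 'web'
        properties: {
          addressPrefix: '10.0.1.0/24'
          networkSecurityGroup: {
            id: webNsg.id // implicit dependency via symbolic reference
          }
        }
      }
    ]
  }
}
```

In the equivalent ARM JSON you would need a `dependsOn` array plus a `resourceId(...)` function call to express the same relationship.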
- That said, because all these resources are still being declared in that one single file, it still looks pretty overwhelming.
- It does, but Bicep makes it really easy to help with this. You can break down a big file like this into more manageable pieces using modules. And the great thing here is that they are self-contained components that you can reuse and easily update. Let’s take a look at the same deployment written to use modules. With this new structure, we’ve broken out our networking components like the Virtual Network and NSGs into a dedicated networking.bicep file. And we have vm.bicep that we’re calling a few times here. If I drill into this VM file, you’ll see a much smaller set of parameters and only the NIC and VM resources. This is the business logic that we’ll use to compose our main.bicep entry point. If I open main.bicep, you’ll see it calls a mix of modules and resources. Our networking stack requires just a few parameters. And then, each tier of our application reuses the generic-tier.bicep module. The parameters allow me to quickly configure settings. For example, if I want to scale out my app tier, I can change my VM count parameter from the default of two up to five. And as I make this change, you’ll see that Bicep in VS Code also supports IntelliSense as I go. Of course, this reuse means the net total code we’re managing is greatly reduced. My main entry point is only 130 lines of code, and each file in the project is small and serves a specific purpose. A lot better than where we started. You could also use Bicep modules in a private registry. And in a few months, we’ll set up a public registry with supported Bicep modules, and we’ll also start accepting community contributions. Now let’s actually deploy this from VS Code using the Azure CLI. One thing that Bicep and ARM templates do for us behind the scenes, prior to the deployment, is validate the entire set of resources that are going to be deployed. Compared to Terraform or other scripted methods, these pre-deployment validation checks reduce failures.
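A rough sketch of what a module-based entry point like this can look like. The parameter names and module outputs below are assumptions for illustration, not the actual files from the demo:

```bicep
// Hypothetical main.bicep composing the deployment from modules.
param location string = resourceGroup().location
param appTierVmCount int = 2 // bump to 5 to scale out the app tier

// Networking broken out into its own self-contained module.
module network 'networking.bicep' = {
  name: 'network'
  params: {
    location: location
  }
}

// Each application tier reuses the same generic module with
// different parameters, instead of repeating the resources.
module appTier 'generic-tier.bicep' = {
  name: 'appTier'
  params: {
    location: location
    subnetId: network.outputs.appSubnetId // hypothetical module output
    vmCount: appTierVmCount
  }
}
```

Because `appTier` reads `network.outputs.appSubnetId`, Bicep again infers the deployment order between the two modules automatically.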
Once it’s started in the Azure Portal, we can monitor the progress of the deployment until it completes. Of course, depending on what you’re deploying, this can take several minutes. And we’ll see, in our case, that everything completes in about five minutes.
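For reference, the CLI step described above typically boils down to a couple of commands like the following. The resource group name, region, and file name are placeholders:

```shell
# Create a target resource group (names/region are illustrative).
az group create --name demo-rg --location westus2

# Optionally preview what the deployment would change before running it.
az deployment group what-if --resource-group demo-rg --template-file main.bicep

# Deploy; the Azure CLI compiles the Bicep file to an ARM template
# and runs the pre-deployment validation automatically.
az deployment group create --resource-group demo-rg --template-file main.bicep
```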
- And to be clear, Bicep works with really any Linux distro and also all the services in Azure. And this is native cloud automation, but many people are coming from Red Hat and using its Ansible Automation Platform for on-prem and hybrid deployments as well.
- Yeah, it’s one of the most popular ways to deploy Linux on Azure. In fact, we already have a rich collection of certified playbooks to use on Azure. What’s new is we will soon have a managed app implementation of the Red Hat Ansible Automation Platform. And this takes care of setting up the Ansible resources for you, reducing the operational burden and keeping everything maintained and up-to-date. This way, you can use it to deploy resources in Azure without worrying about the underlying Ansible infrastructure. It just becomes BYOA, or bring your own automation.
- So what does it take then, to get the managed app up and running?
- It’s really easy, especially compared to how you’d typically deploy it on-prem. Let me show you. I’m in the Azure Marketplace and I’ll select the Ansible Automation Platform and get started. Here, you can see there are a handful of basic fields with parameters you need to define. Standard stuff like subscription and resource group, region, how you want to name it, and a couple of usernames and passwords. That’s it. No servers to deploy and configure as you normally would on-prem, or using VMs in the cloud. From there, I can kick off the deployment of the managed service app, and that takes a few minutes to complete. And as this deploys, let me explain what happens. It provisions the entire Ansible Automation Platform and its components, from the required clusters and containers in AKS to a Postgres database instance, as well as additional Azure services for storage, networking, logging, and encryption. Notice this isn’t using virtual machines. These are all cloud native services. Of course, it configures connectivity between the deployed platform components in the Azure cloud. Once everything is set up, the Red Hat team will have access to just the managed resource group that’s running the platform, so they can assist you with support and keep your Ansible environment running and up-to-date.
- Okay, so now that we have it running, how would you use it then to deploy new apps or workloads in Azure?
- Well, let me show you an example. I want to deploy a fairly complex app again with a front, middle and backend tier. Everything needs to talk over networking connections, and we need to make other configurations as part of the process. With Ansible, we can use YAML-based templates for each of the services we need to deploy. And each one has a set of variables or parameters we define in code. Let me show you a few playbook templates as YAML files. This one is to set up a resource group in Azure. And here’s another to provision a Red Hat VM with the right settings. You could also use this with other Linux distros. And here’s another to provision a Windows Server 2022 VM. For our app deployment, I need all of these pieces and more to deploy in the right sequence, and some will have dependencies on the completion of the previous steps. So if I head over to my Ansible environment running as a managed app in Azure, first I can see all of these templates and a few more. In fact, to save you time, there is an entire Azure collection you can access with code samples for most things you’d want to deploy in Azure. And one of the coolest things we can do here is use a workflow job template to stitch together all of the nodes, as they’re called here, in the right sequence with any dependencies called out. For example, if I want to use a new resource group instead of an existing one, I can add that node and define what happens if the process succeeds or fails, or choose Always, so it runs regardless of the success or failure of the previous step. Then I can link it to subsequent tasks, for example, the ones that will use this new resource group as they’re deployed. And now with all of my nodes defined and loaded in with any required variables, I just need to hit Save. And now in the templates view, I can launch the deployment process.
And each job, like this one for a RHEL 8 VM, has an output so you can see the status of all the playbooks that ran, and here I can see my deployment completed successfully.
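The resource-group and RHEL VM playbook templates mentioned above can be sketched roughly like this, using the `azure.azcollection` modules from the Azure collection. All names, sizes, and image details here are illustrative assumptions, not the actual demo files:

```yaml
# Hypothetical playbook: create a resource group, then a RHEL VM in it.
- name: Provision Azure resources for the app
  hosts: localhost
  connection: local
  tasks:
    - name: Create the resource group
      azure.azcollection.azure_rm_resourcegroup:
        name: demo-rg          # placeholder name
        location: westus2      # placeholder region

    - name: Provision a RHEL 8 VM in that group
      azure.azcollection.azure_rm_virtualmachine:
        resource_group: demo-rg
        name: rhel-vm-01
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: "ssh-rsa AAAA... (placeholder public key)"
        image:
          publisher: RedHat
          offer: RHEL
          sku: "8-lvm-gen2"    # assumed RHEL 8 image SKU
          version: latest
```

In a workflow job template, each playbook like this becomes a node, and the success/failure links between nodes express the deployment sequence.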
- And this is great news if you’re coming from Ansible and just want to focus on the automation side of things. You can do that instead of focusing on the underlying infrastructure.
- That’s really the goal with all these updates. It’s really about making the coding experience easier and taking away a lot of the common challenges. You can see that in the form of simplified syntax and modularization from Bicep, as well as making it easier to build and run the Ansible Automation Platform. And both of these examples work perfectly with Linux and also across platforms.
- So for anyone who’s watching and looking to get started with Bicep or the managed app for Ansible Automation Platform, what do you recommend?
- Bicep is available now and you can learn more at aka.ms/bicep. And it’s open source, as I mentioned, so you can connect with the community at github.com/azure/bicep. For the Ansible managed app, we’re expanding the private preview signups over the next few months in North America. You can sign up at aka.ms/AnsibleManagedApp.
- Thanks Lachie, for sharing all of the open source updates for automation. Of course, keep checking back to Microsoft Mechanics for the latest tech updates. And don’t forget to subscribe to our channel if you haven’t already. And thank you so much for watching.