[{"content":"Had a 1:1 last week. Nothing unusual, just a regular conversation, catching up on things. But there was one moment that stayed with me after the call ended.\nAt some point he said he usually thinks about people in four different ways. Not as an official framework or anything like that, just how he personally sees it. And I don\u0026rsquo;t know, the way he explained it felt simple, but it kind of stuck.\nManaging performance He said the first one is when someone is still managing performance. That\u0026rsquo;s when the manager needs to stay close, making sure things are on track, helping, correcting. It\u0026rsquo;s not even necessarily about lack of capability, sometimes it\u0026rsquo;s just inconsistency.\nThe question here is pretty basic. Can this person be trusted to handle the fundamentals without things slipping?\nDoing your job Then there\u0026rsquo;s the second type, people who just do their job. They deliver what\u0026rsquo;s assigned, they don\u0026rsquo;t create problems, everything works.\nBut it kind of stops there.\nNo real stretch, no extra ownership, no influence beyond what\u0026rsquo;s expected. Just consistent execution within the scope. I think a lot of people stay here for a long time without even realizing.\nHigh performance individual Then he said the third one is high performance individuals, and that\u0026rsquo;s where he sees me today. Which, yeah, felt good to hear.\nThis is where you take ownership, you go beyond what\u0026rsquo;s asked, you solve problems, you deliver. People trust you to execute and to get things done.\nBut even here, the impact is still very tied to you doing the work.\nBecoming a partner Then he paused a bit and said the fourth level is becoming a partner.\nAnd that\u0026rsquo;s where the shift happens.\nIt\u0026rsquo;s not about doing more work or being busier. It\u0026rsquo;s more about how people start to rely on you differently. Not just to execute, but to think with them. 
To shape decisions, to anticipate what\u0026rsquo;s coming, to say \u0026ldquo;this is what we should do\u0026rdquo; instead of just \u0026ldquo;I\u0026rsquo;ll take care of it.\u0026rdquo;\nIt\u0026rsquo;s a different kind of trust.\nSay–Do ratio Right after that he mentioned something else, which at first sounded very simple. He called it the say–do ratio.\nBasically how often what you say you\u0026rsquo;re going to do actually happens.\nSounds obvious, but the more I thought about it, the more I realized how much this connects with everything else he said.\nAt some point, especially as you grow, people stop tracking your work. They don\u0026rsquo;t follow up on every little thing, they don\u0026rsquo;t check again, they don\u0026rsquo;t remind you. They just assume it\u0026rsquo;s handled.\nIf you say you\u0026rsquo;ll do something, that\u0026rsquo;s enough.\n\u0026ldquo;Ricardo said he\u0026rsquo;ll take care of it, we\u0026rsquo;re good.\u0026rdquo;\nAnd that only works if your say–do ratio is solid.\nIf it\u0026rsquo;s not, people don\u0026rsquo;t necessarily call it out directly. They just start adding layers without saying much. More follow-ups, more check-ins, more control around things.\nNot because you\u0026rsquo;re not capable.\nBut because you\u0026rsquo;re not predictable.\nWhat stayed with me I think that\u0026rsquo;s what really clicked for me.\nThe gap between being a high performer and becoming a partner is probably less about doing more and more about being someone people can rely on without thinking twice.\nSomeone whose word is enough.\nIt made me reflect on small things. How many times I say \u0026ldquo;I\u0026rsquo;ll take a look\u0026rdquo; or \u0026ldquo;I\u0026rsquo;ll get back to you\u0026rdquo; without being clear on when. Or committing to things a bit loosely, thinking I\u0026rsquo;ll figure it out later.\nThose small moments probably add up more than we realize.\nAnyway, nothing super complex. 
But one of those ideas that kind of changes how you look at your own day-to-day.\nI\u0026rsquo;m definitely going to pay more attention to what I commit to and how I follow through.\nBecause maybe that\u0026rsquo;s the real difference.\nNot doing more, but being someone people don\u0026rsquo;t need to follow up with.\n","permalink":"https://rmmartins.com/2026/03/24/something-my-manager-said-today-that-stayed-with-me/","summary":"\u003cp\u003eHad a 1:1 last week. Nothing unusual, just a regular conversation, catching up on things. But there was one moment that stayed with me after the call ended.\u003c/p\u003e\n\u003cp\u003eAt some point he said he usually thinks about people in four different ways. Not as an official framework or anything like that, just how he personally sees it. And I don\u0026rsquo;t know, the way he explained it felt simple, but it kind of stuck.\u003c/p\u003e","title":"Something My Manager Said Today That Stayed With Me"},{"content":"Over the past year, I\u0026rsquo;ve been working closely with some of the bigger and more visible AI customers in Microsoft\u0026rsquo;s ecosystem. Large platforms. Fast-moving teams. High expectations. High stakes.\nOn paper, that kind of visibility sounds exciting. In reality, it comes with a weight that\u0026rsquo;s hard to explain unless you\u0026rsquo;ve been there.\nBecause being close to impact also means being close to consequence.\nVisibility changes everything When you work with smaller teams or early-stage projects, mistakes are usually contained. You can recover. You can explain. You can iterate.\nWhen you work with high-visibility AI customers, nothing is really small.\nA single design decision can affect:\nThousands of engineers downstream. Millions of end users. Public narratives around safety, reliability, and trust. Strategic bets measured in years, not quarters. The room feels different.\nConversations slow down. Words are chosen more carefully. 
Silence starts to carry meaning.\nVisibility doesn\u0026rsquo;t just magnify success. It magnifies uncertainty.\nTrade-offs stop being abstract Earlier in our careers, most trade-offs are technical.\nPerformance versus simplicity. Speed versus correctness. Cost versus convenience.\nAt this level, trade-offs become organizational. And human.\nYou start weighing things like:\nTime to market versus long-term trust. Capability versus controllability. Innovation versus operational risk. Transparency versus competitive exposure. There is rarely a clean answer. Most of the time, there isn\u0026rsquo;t one. Only options with different failure modes.\nWhat I\u0026rsquo;ve learned is that senior decision-making is not really about finding the \u0026ldquo;right\u0026rdquo; answer. It\u0026rsquo;s about choosing which risk you\u0026rsquo;re willing to own. And living with it.\nBeing in rooms you didn\u0026rsquo;t imagine entering One of the quiet surprises of this work has been the rooms it puts you in.\nGMs. CVPs. Distinguished Engineers. CEOs and CTOs.\nPeople who have built companies, shaped platforms, and influenced entire industries.\nWhat surprised me most was not their intelligence. It was their restraint.\nThe best leaders I\u0026rsquo;ve met ask fewer questions, not more. They listen longer than they speak. They are very aware of second-order effects.\nAnd despite their credentials, many of them are still actively learning. That realization recalibrates your ego very quickly.\nThe humbling part I\u0026rsquo;ve interacted with people from places like Harvard and Stanford. People whose resumes read like blueprints for success.\nWhat stands out is not pedigree. It\u0026rsquo;s clarity of thought.\nThey are comfortable saying:\n\u0026ldquo;I don\u0026rsquo;t know.\u0026rdquo; \u0026ldquo;What are we missing?\u0026rdquo; \u0026ldquo;What breaks if this scales ten times?\u0026rdquo; That kind of humility is not insecurity. 
It\u0026rsquo;s discipline.\nIt taught me that intellectual confidence is not about having answers. It\u0026rsquo;s about being honest about uncertainty.\nResponsibility changes how you show up When your work touches high-impact AI systems, you stop optimizing for personal brilliance.\nYou start optimizing for:\nClarity. Repeatability. Safety margins. The ability for others to reason about your decisions. You think more about things like:\nHow this will be interpreted. Who will inherit this system. What happens at three in the morning when something goes wrong. Doing this work as an immigrant, in a language that is not my first, adds another layer of care. You slow down even more. You double-check yourself. You choose words with intention.\nResponsibility has a way of sanding down sharp edges. It forces you to trade cleverness for reliability.\nLearning accelerates under pressure I\u0026rsquo;ve learned more in this phase than in years of steady growth.\nNot because the problems are always harder, although many of them are, but because the feedback loop is unforgiving.\nAmbiguity shows up fast. Assumptions get challenged immediately. Hand-waving doesn\u0026rsquo;t survive contact with scale.\nThe learning is not linear. It\u0026rsquo;s layered.\nTechnical depth compounds with organizational awareness, communication, and judgment.\nYou start seeing systems less as code, and more as reflections of people, incentives, and constraints.\nA quiet shift in ambition This experience has changed how I think about growth.\nEarlier, growth meant:\nBigger scope. More visibility. Harder problems. Now, growth feels more like:\nBetter questions. Fewer unnecessary moves. Decisions that age well. Impact is no longer about being the loudest voice in the room. It\u0026rsquo;s about making the room calmer after you speak.\nClosing thought Working close to power in the age of AI is not glamorous in the way people imagine. 
It\u0026rsquo;s demanding, humbling, and often uncomfortable.\nBut it\u0026rsquo;s also deeply formative.\nIt teaches you that the real work is not building impressive systems. It\u0026rsquo;s building systems people can trust, including the people who will inherit them later.\nAnd that responsibility, more than visibility, is what stays with you.\n","permalink":"https://rmmartins.com/2026/02/09/responsibility-trade-offs-and-learning-at-ai-scale/","summary":"\u003cp\u003eOver the past year, I\u0026rsquo;ve been working closely with some of the bigger and more visible AI customers in Microsoft\u0026rsquo;s ecosystem.\nLarge platforms. Fast-moving teams. High expectations. High stakes.\u003c/p\u003e\n\u003cp\u003eOn paper, that kind of visibility sounds exciting.\nIn reality, it comes with a weight that\u0026rsquo;s hard to explain unless you\u0026rsquo;ve been there.\u003c/p\u003e\n\u003cp\u003eBecause being close to impact also means being close to consequence.\u003c/p\u003e\n\u003ch2 id=\"visibility-changes-everything\"\u003eVisibility changes everything\u003c/h2\u003e\n\u003cp\u003eWhen you work with smaller teams or early-stage projects, mistakes are usually contained.\nYou can recover. You can explain. You can iterate.\u003c/p\u003e","title":"Responsibility, Trade-Offs, and Learning at AI Scale"},{"content":"\u0026hellip;and why that\u0026rsquo;s not an accident. If you have worked with both Azure and AWS long enough, you have probably felt it.\nAWS feels straightforward. Azure feels… heavier.\nNot worse. Not broken. Just harder to reason about.\nThe console feels denser. The mental model feels less obvious. The number of \u0026ldquo;extra\u0026rdquo; concepts feels higher.\nThis is not a beginner problem. Senior engineers feel it too.\nAnd the most interesting part is this: that friction is not accidental.\nThe common explanation. And why it\u0026rsquo;s incomplete The usual explanation goes something like this:\n\u0026ldquo;AWS was built for developers. 
Azure was built for enterprises.\u0026rdquo;\nThere is truth there, but it is shallow truth.\nIt explains who the platforms were designed for, but not why the experience feels fundamentally different once systems grow beyond a certain size.\nTo understand that, you have to look at what each platform optimizes for at its core.\nAWS optimizes for primitives AWS is built around simple, composable primitives.\nYou get:\nAn identity system. A network. Compute. Storage. APIs that mostly do one thing well. The platform assumes that you, the customer, will assemble these primitives into systems.\nThis leads to a few consequences:\nServices feel decoupled. The learning curve is front-loaded. Once you \u0026ldquo;get it\u0026rdquo;, patterns repeat. The platform rarely interferes with your design choices. This is why AWS often feels intuitive to engineers with strong systems backgrounds. It stays out of the way.\nThe cost of this approach is that you own more of the architecture. AWS gives you tools, not guardrails.\nAzure optimizes for systems, not components Azure makes a very different assumption.\nAzure assumes that:\nIdentity is central, not optional. Governance is not a later concern. Enterprises will need control before they need speed. Integration matters more than purity. This is why Azure introduces concepts earlier that AWS postpones or leaves optional:\nManagement groups. Role inheritance. Policy enforcement. Resource-level RBAC. Tight coupling with identity and compliance. From a distance, this looks like complexity. From close up, it is intentional structure.\nAzure is opinionated about how large organizations should operate.\nThe real reason Azure feels harder Azure feels harder because it forces decisions earlier.\nAWS often lets you postpone decisions:\nGovernance can come later. Identity models can evolve organically. Network design can be refactored gradually. Azure pushes those questions to the front:\nWho owns this? Who can change this? 
Under which policy does this resource exist? How does this align with the directory? This is uncomfortable, especially for teams that want to move fast.\nBut there is a trade-off hiding here.\nComplexity vs ambiguity AWS reduces friction by allowing ambiguity.\nAzure reduces ambiguity by introducing complexity.\nNeither approach is inherently better. They optimize for different failure modes.\nIn AWS, teams often struggle later with:\nSprawling accounts. Inconsistent IAM models. Security controls added retroactively. Cost visibility that arrives too late. In Azure, teams struggle earlier with:\nToo many concepts at once. Heavier initial design work. More \u0026ldquo;why do I need this?\u0026rdquo; moments. What Azure does is make organizational complexity visible.\nAnd that is why it feels harder.\nWhy this matters at scale At small scale, AWS often feels faster.\nAt large scale, Azure often feels safer.\nNot because Azure is magically more secure, but because its model aligns more naturally with how large organizations actually function:\nCentral identity. Clear ownership. Policy as a first-class concern. Strong boundaries between teams. This is also why Azure adoption accelerates once companies reach a certain size. The pain shifts from \u0026ldquo;this is too complex\u0026rdquo; to \u0026ldquo;this is preventing chaos\u0026rdquo;.\nThat transition is rarely smooth, but it is predictable.\nAzure\u0026rsquo;s biggest mistake. And AWS\u0026rsquo;s biggest risk Azure\u0026rsquo;s biggest mistake is that it rarely explains why its model exists.\nDocumentation tells you what to configure, not why it matters. This makes complexity feel arbitrary instead of purposeful.\nAWS\u0026rsquo;s biggest risk is the opposite.\nIts flexibility can mask structural problems until they are expensive to fix. By the time governance becomes urgent, the system is already entrenched.\nNeither platform solves this perfectly. 
But understanding the trade-off changes how you approach both.\nThe maturity lens Here is the pattern I see repeatedly.\nEarly-stage teams prefer AWS. Growing teams struggle with both. Mature enterprises often settle more comfortably into Azure. Not because Azure is simpler. But because its constraints match their reality.\nCloud maturity is not about which platform you choose. It is about recognizing when friction is a signal, not a flaw.\nSometimes friction is telling you:\nYou need clearer ownership. You need stronger boundaries. You need to stop optimizing for speed alone. A reframing that helps Instead of asking:\n\u0026ldquo;Why is Azure so complicated?\u0026rdquo;\nA more useful question is:\n\u0026ldquo;What organizational problem is this trying to surface?\u0026rdquo;\nSeen through that lens:\nAzure feels less arbitrary. AWS feels less forgiving. And architectural decisions become clearer. You stop fighting the platform and start using it intentionally.\nThe takeaway Azure feels harder than AWS because it asks harder questions earlier.\nQuestions about identity. About governance. About ownership. About responsibility.\nThose questions are unavoidable at scale. Azure simply refuses to let you ignore them.\nThat does not make Azure better. It makes it honest about complexity.\nAnd in real systems, honesty usually hurts before it helps.\n","permalink":"https://rmmartins.com/2026/02/03/why-azure-feels-harder-than-aws/","summary":"\u003ch2 id=\"and-why-thats-not-an-accident\"\u003e\u0026hellip;and why that\u0026rsquo;s not an accident.\u003c/h2\u003e\n\u003cp\u003eIf you have worked with both Azure and AWS long enough, you have probably felt it.\u003c/p\u003e\n\u003cp\u003eAWS feels straightforward.\nAzure feels… heavier.\u003c/p\u003e\n\u003cp\u003eNot worse. Not broken. 
Just harder to reason about.\u003c/p\u003e\n\u003cp\u003eThe console feels denser.\nThe mental model feels less obvious.\nThe number of \u0026ldquo;extra\u0026rdquo; concepts feels higher.\u003c/p\u003e\n\u003cp\u003eThis is not a beginner problem.\nSenior engineers feel it too.\u003c/p\u003e\n\u003cp\u003eAnd the most interesting part is this: \u003cstrong\u003ethat friction is not accidental\u003c/strong\u003e.\u003c/p\u003e","title":"Why Azure Feels Harder Than AWS"},{"content":"For years, \u0026ldquo;cloud-first\u0026rdquo; has been treated as a badge of honor. Companies proudly announce that everything is in the cloud, architects optimize for migrations instead of outcomes, and teams equate progress with how little infrastructure they still own.\nBut after working with dozens of real systems, across different industries and at different scales, one thing becomes clear.\nCloud maturity is not about being 100% cloud. It is about knowing why each workload is where it is.\nThis distinction sounds subtle. In practice, it separates teams that scale calmly from teams that spend their lives reacting to incidents, cost surprises, and architectural regrets.\nThe early phase: Cloud as an escape hatch Most cloud journeys start for good reasons.\nOn-prem environments are rigid. Capacity planning is slow. Procurement cycles are painful. Failures are hard to recover from.\nThe cloud promises elasticity, speed, and a way out of years of accumulated technical debt.\nAt this stage, \u0026ldquo;move everything to the cloud\u0026rdquo; makes sense. You are optimizing for velocity, not elegance.\nReplatforming is often messy, architectures are imperfect, and costs are tolerated as long as things work better than before. This is not immaturity. 
This is survival mode.\nThe mistake happens when this phase becomes the final destination.\nWhen \u0026ldquo;cloud-first\u0026rdquo; turns into \u0026ldquo;cloud-only\u0026rdquo; Somewhere along the way, cloud adoption becomes ideology.\nTeams stop asking whether a workload belongs in the cloud. They only ask how fast they can move it there.\nSignals of this phase are easy to recognize:\nCost discussions are reactive, not designed. Latency problems are \u0026ldquo;accepted\u0026rdquo; as the price of scale. Complex architectures exist mainly to work around billing models. Engineers know the services, but not the trade-offs. The cloud is still delivering value. But friction is quietly increasing.\nAt this point, many organizations think they are mature because everything runs on managed services. In reality, they have simply outsourced complexity without understanding it.\nWhat mature cloud users do differently Mature cloud organizations behave very differently. Not because they use fewer services, but because they are deliberate.\nThey ask questions like:\nWhat value does the cloud give this workload today? Is elasticity actually being used, or just paid for? Where does latency matter more than convenience? Which costs scale with usage, and which scale with ignorance? And sometimes, the answer is uncomfortable. Sometimes the answer is: this workload does not benefit from the cloud anymore.\nThe uncomfortable truth: Cloud is not always the optimal end state This is where the conversation usually gets emotional.\nCloud is powerful. Cloud is convenient. Cloud unlocked innovation that would have been impossible otherwise. All of that is true.\nIt is also true that:\nData egress can dominate costs at scale. Always-on workloads can be cheaper outside public cloud. Specialized infrastructure can outperform generalized platforms. Regulatory and data gravity concerns do not disappear with abstractions. Mature teams accept this reality without seeing it as failure. 
They understand that cloud is a tool, not a destination.\nHybrid is not a compromise, it is a signal of maturity Hybrid architectures are often described as transitional. Something you do on the way to \u0026ldquo;full cloud\u0026rdquo;.\nIn practice, hybrid is frequently the end state for mature systems. Not because teams failed to migrate, but because they succeeded in understanding their systems deeply enough to make informed choices.\nA mature hybrid architecture is not accidental. It is intentional.\nCloud where elasticity, managed services, and global reach matter. Dedicated infrastructure where predictability, cost, or performance dominate. Clear interfaces between the two. Ownership of trade-offs, not avoidance of them. This requires more skill than going all-in on cloud. Not less.\nCost optimization is not maturity, cost awareness is One of the biggest misconceptions in cloud discussions is equating cost optimization with maturity.\nReducing bills is good. Understanding why the bill exists is better.\nImmature teams chase discounts, reservations, and short-term savings. Mature teams design architectures that align cost behavior with business behavior.\nThey know which costs are elastic and which are structural. They know which spikes are expected and which indicate design flaws. They know when higher cost is justified and when it is pure waste.\nThis level of awareness does not come from dashboards alone. It comes from architectural clarity.\nObservability as a maturity marker Another reliable signal of cloud maturity is how teams observe their systems.\nImmature environments rely heavily on provider dashboards and default metrics. They notice problems when something breaks or when the invoice arrives.\nMature environments treat observability as a first-class design concern.\nCustom metrics tied to business behavior. Clear ownership of signals. Alerting designed to inform decisions, not create noise. The ability to explain system behavior without guessing. 
This applies equally to infrastructure, platforms, and AI workloads.\nIf you cannot explain why a system behaves the way it does, you are not mature. Even if it runs in the cloud.\nAI workloads made this painfully obvious AI has accelerated this conversation.\nMany teams discovered very quickly that deploying a model is easy. Running it sustainably is not.\nSuddenly, questions about throughput, latency, cost per request, capacity planning, and hardware constraints matter again. Very familiar infrastructure questions, just with new labels.\nThe teams that struggle most with AI in production are not lacking models. They are lacking infrastructure maturity.\nAI did not create this problem. It exposed it.\nSo what does cloud maturity actually mean? Cloud maturity means:\nYou choose cloud because it fits the problem, not because it fits the narrative. You understand the cost model of your architecture, not just the invoice. You accept trade-offs and design around them consciously. You are comfortable saying \u0026ldquo;this should not be in the cloud\u0026rdquo; when that is true. You can explain your system to another senior engineer without hand-waving. None of this requires being 100% cloud. In fact, insisting on that often indicates the opposite.\nThe real goal The goal was never \u0026ldquo;everything in the cloud\u0026rdquo;. The goal was better systems.\nSystems that scale when needed. Systems that fail gracefully. Systems whose cost behavior matches business reality. Systems you actually understand.\nWhen cloud helps you achieve that, use it fully. When it does not, be confident enough to choose differently.\nThat confidence is what maturity looks like.\n","permalink":"https://rmmartins.com/2026/01/23/cloud-maturity-is-not-about-being-100-cloud/","summary":"\u003cp\u003eFor years, \u0026ldquo;cloud-first\u0026rdquo; has been treated as a badge of honor. 
Companies proudly announce that everything is in the cloud, architects optimize for migrations instead of outcomes, and teams equate progress with how little infrastructure they still own.\u003c/p\u003e\n\u003cp\u003eBut after working with dozens of real systems, across different industries and at different scales, one thing becomes clear.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eCloud maturity is not about being 100% cloud.\u003c/strong\u003e\nIt is about knowing \u003cem\u003ewhy\u003c/em\u003e each workload is where it is.\u003c/p\u003e","title":"Cloud Maturity Is Not About Being 100% Cloud"},{"content":"\nThroughout my career, I\u0026rsquo;ve had countless conversations about career paths with colleagues, mentees, and professionals in transition. In those talks, I often share practices that have helped me personally along the way.\nBut this article is different. It\u0026rsquo;s transparent and honest. Although I\u0026rsquo;ve spoken about parts of my story with people close to me, this is the first time I\u0026rsquo;m sharing it so openly, including my mistakes, lessons, and reflections in a public space.\nAnd I want to start with a phrase that has guided me for years:\n\u0026ldquo;Badges are temporary. Intellectual capital is permanent.\u0026rdquo;\nThe myth of absolute security For a long time, I believed that working for a big company meant stability. But reality proved otherwise: Google, Meta, Microsoft, Amazon, and many others have all made massive layoffs affecting brilliant professionals in high-impact roles.\nLayoffs aren\u0026rsquo;t always about performance. Often, they\u0026rsquo;re driven by factors beyond our control:\nCorporate strategy: shifts in focus. Market: investment downturns and economic cycles. Efficiency: cost reductions, even in top-performing teams. Job stability doesn\u0026rsquo;t come from a badge. It comes from preparation and that\u0026rsquo;s built outside your job description.\nWhat is intellectual capital? 
Intellectual capital is the collection of knowledge, experiences, and insights you accumulate throughout your journey. It\u0026rsquo;s what you know, how you think, and the way you apply that knowledge to create value.\nIt\u0026rsquo;s portable: it moves with you wherever you go. It\u0026rsquo;s scalable: it grows when shared and externalized.\nShyness as a starting point When I was younger, I was extremely shy. That made it hard for me to speak up, but it strengthened my writing. What once seemed like a limitation ended up shaping my entire career.\nWriting became my way of thinking, organizing ideas, and sharing them. Later, that skill became the foundation for everything: blogging, personal branding, portfolio building, recognition.\nOften, your limitation is also your opportunity. If you don\u0026rsquo;t like public speaking, write. If you don\u0026rsquo;t like writing, make videos. What matters is externalizing your knowledge.\nTV Globo: Discipline and documentation At Globo, I worked in rotating shifts supporting critical 24×7 systems. That\u0026rsquo;s where I learned one habit that I\u0026rsquo;ve carried through life: document everything.\nI had to leave detailed shift reports for the next person. I\u0026rsquo;d stop up to an hour before my shift ended just to write them properly.\nThat routine taught me:\nClarity in written communication. Documentation as a tool for continuity. The importance of leaving work bigger than the individual. That practice became the essence of what we now call the brag document.\n📌 The brag document emerged in Silicon Valley as a \u0026ldquo;professional logbook\u0026rdquo;, a personal document where you record projects, achievements, learnings, and impact. It helps with performance reviews, resume updates, and interviews. In short: it\u0026rsquo;s career insurance.\n2005–2008: The first blogs and the birth of a domain My technical writing started simply: on a WordPress.com blog. 
I documented scripts, labs, and discoveries that helped me day to day and might help others too.\nIn 2008, when individuals could finally register their own domains in Brazil, I registered ricardomartins.com.br. From that point on, I started organizing my knowledge in my own space.\nWithout realizing it, I was building a public portfolio. What began as personal notes evolved into something useful to others, and later became the foundation of my personal brand.\nTakeaways:\nDon\u0026rsquo;t wait to have \u0026ldquo;big things\u0026rdquo; to start. Use what\u0026rsquo;s available (a simple blog is enough). Consistency and externalization matter more than perfection. 2012: Peixe Urbano and the cloud boom In 2012, I joined Peixe Urbano, the first AWS customer in Brazil. That was my first contact with cloud computing, and probably the time I grew the most in my career.\nI learned in months what had taken years before. As a result, I started writing even more articles, tutorials, reflections. It was a virtuous cycle: the more I learned, the more I wrote, the more I solidified my knowledge.\nLessons for today:\nMoments of disruption (like cloud in 2012 or AI in 2023) create huge opportunities. Those who engage early in these cycles learn faster and gain visibility. Growth depends not just on time, but on learning intensity. 2015: Bemobi and the power of personal branding At Bemobi, I had a turning point. An AWS Solution Architect (Felipe Garcia) heard my name in a meeting and said:\n\u0026ldquo;Oh, you\u0026rsquo;re Ricardo from the blog!\u0026rdquo;\nThat moment was a huge validation. I realized that the blog I\u0026rsquo;d been maintaining since 2005 was making me recognizable in the market. It wasn\u0026rsquo;t self-promotion, it was branding in practice.\nThe 3 pillars of personal branding:\nClarity: Define what you want to be known for. Consistency: Show up regularly. Credibility: Demonstrate real impact. 
Networking: when doors open The story didn\u0026rsquo;t end at Bemobi. A few months later, Felipe moved to Microsoft and when a position opened up, he referred me.\nAt first, I was surprised. I didn\u0026rsquo;t think I had the \u0026ldquo;profile\u0026rdquo; to join a company like Microsoft. But it was exactly what I\u0026rsquo;d built outside the badge: my blog, writing, and personal brand that opened the door.\nThree layers of strategic networking:\nDirect collaborators: colleagues and former colleagues. Technical communities: meetups, open source, user groups. Opportunity connections: people who find you through your content. Networking isn\u0026rsquo;t about asking for favors, it\u0026rsquo;s about being remembered for what you contribute.\nFinancial and emotional readiness A layoff can shake both your bank account and your self-esteem. That\u0026rsquo;s why you need protection in two areas:\nFinancial:\nKeep an emergency fund (3–6 months). Avoid consumer debt. Have simple, low-risk investments for security. Emotional:\nDon\u0026rsquo;t confuse your identity with your badge. (See this article I wrote sometime ago) Seek support, mentorship, therapy, community. View transitions as part of the journey, not as failure. Ask yourself: How many months could I sustain myself today without income?\nContinuous learning What\u0026rsquo;s kept me relevant is never stopping learning. From Linux to Kubernetes, on-premises to the cloud, there\u0026rsquo;s always a next step.\nA practical approach:\nBe T-shaped: deep in one area, broad in several. Balance soft and hard skills. Learn for the market, not just your current role. Ask yourself: What\u0026rsquo;s tomorrow\u0026rsquo;s skill that I should start learning today?\nMaster the fundamentals, because everything else will change There\u0026rsquo;s a phrase that summarizes my philosophy about learning:\n\u0026ldquo;Learn the fundamentals. 
The rest will change anyway.\u0026rdquo;\nReference: \u0026ldquo;Learn the fundamentals\u0026rdquo; – A note to self\nTechnologies, languages, and tools come and go, but fundamentals stay:\nNetworking and operating systems. Data structures and algorithms. Distributed systems and concurrency. Security and design principles. These are what gave me confidence to move between career transitions, from on-prem to cloud, from legacy to distributed systems. Whenever I had to learn something new, fundamentals were the shortcut.\nExercise: Choose 3 core fundamentals in your field and spend 1 hour per week revisiting or applying them in small projects.\nBe ready to tell your story in 5 minutes If someone asked me to summarize my career, I\u0026rsquo;d say:\nShyness made me write. Globo taught me discipline. Peixe Urbano was my learning explosion. The blog became my brand and opened doors. Networking led me to Microsoft. This kind of clear, concise narrative opens doors in interviews, events, and unexpected encounters.\nConclusion: Security comes from preparation My career wasn\u0026rsquo;t built on badges. It was built on the intellectual capital I shared.\nWriting gave me a voice. Documenting gave me discipline. Consistency gave me recognition. Content-based networking opened doors. Mastering fundamentals gave me confidence to adapt. And I want to end by reinforcing the phrase that guided this entire reflection and, hopefully, will guide you too:\n\u0026ldquo;Badges are temporary. Intellectual capital is permanent.\u0026rdquo;\nWriting this article is also an act of vulnerability. It\u0026rsquo;s the first time I\u0026rsquo;ve openly shared my journey, the mistakes, wins, and lessons. 
I do it because I believe honest stories can inspire others to prepare better.\nFinal reflection: If tomorrow you had only 5 minutes to present yourself to a new employer, what would you have to show?\n","permalink":"https://rmmartins.com/2025/10/14/no-one-is-layoff-proof-how-intellectual-capital-can-ne-your-best-protection/","summary":"\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/wp-content/uploads/2025/10/image-1024x683.png\"\u003e\u003c/p\u003e\n\u003cp\u003eThroughout my career, I\u0026rsquo;ve had countless conversations about career paths with colleagues, mentees, and professionals in transition. In those talks, I often share practices that have helped me personally along the way.\u003c/p\u003e\n\u003cp\u003eBut this article is different. It\u0026rsquo;s transparent and honest.\nAlthough I\u0026rsquo;ve spoken about parts of my story with people close to me, this is the first time I\u0026rsquo;m sharing it so openly, including my mistakes, lessons, and reflections in a public space.\u003c/p\u003e","title":"No One Is Layoff-Proof: How Intellectual Capital Can Be Your Best Protection"},{"content":"It\u0026rsquo;s easy, even comforting, to blend in with your badge. We introduce ourselves with our name followed by the company. We join meetings carrying that title. We post with the credibility it gives.\nBut at the end of the day, the company is just where you are, not who you are.\nYou are what you\u0026rsquo;ve built. What you\u0026rsquo;ve learned, and taught. You are the reputation that stands when your name shows up alone. You are the value that stays when the badge is gone.\nThat question hit harder after yesterday\u0026rsquo;s Microsoft layoffs. 
Brilliant, talented, collaborative people, still impacted by decisions far beyond performance.\nIt\u0026rsquo;s a painful reminder that there\u0026rsquo;s no real stability in the badge.\nWhich brings us to something urgent: Upskilling is no longer a differentiator, it\u0026rsquo;s survival.\nKeep learning, not just tech, but business, strategy, soft skills Build presence and value beyond your title Leave a trace of impact, something that speaks for you Cultivate real connections, people are the real safety net Adapt fast, the market won\u0026rsquo;t wait Uncertain times call for courage, community, and reinvention.\nIf the badge disappears tomorrow, will your name still carry weight? That\u0026rsquo;s the real question.\n","permalink":"https://rmmartins.com/2025/05/14/who-are-you-without-the-companys-last-name/","summary":"\u003cp\u003eIt\u0026rsquo;s easy, even comforting, to blend in with your badge.\nWe introduce ourselves with our name followed by the company.\nWe join meetings carrying that title.\nWe post with the credibility it gives.\u003c/p\u003e\n\u003cp\u003eBut at the end of the day, the company is just where you are, not who you are.\u003c/p\u003e\n\u003cp\u003eYou are what you\u0026rsquo;ve built.\nWhat you\u0026rsquo;ve learned, and taught.\nYou are the reputation that stands when your name shows up alone.\nYou are the value that stays when the badge is gone.\u003c/p\u003e","title":"Who Are You Without the Company's Last Name?"},{"content":"This article was originally published at https://cloud.redhat.com/experts/aro/private-cluster/\nA Quickstart guide to deploying a Private Azure Red Hat OpenShift cluster.\nPrerequisites Azure CLI Obviously you\u0026rsquo;ll need to have an Azure account to configure the CLI against.\nMacOS\nSee Azure Docs for alternative install options.\nInstall Azure CLI using homebrew brew update \u0026amp;\u0026amp; brew install azure-cli Install sshuttle using homebrew brew install sshuttle Linux\nSee Azure Docs for 
alternative install options.\nImport the Microsoft Keys sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc Add the Microsoft Yum Repository cat \u0026lt;\u0026lt; EOF | sudo tee /etc/yum.repos.d/azure-cli.repo [azure-cli] name=Azure CLI baseurl=https://packages.microsoft.com/yumrepos/azure-cli enabled=1 gpgcheck=1 gpgkey=https://packages.microsoft.com/keys/microsoft.asc EOF Install Azure CLI sudo dnf install -y azure-cli sshuttle Prepare Azure Account for Azure OpenShift Log into the Azure CLI by running the following and then authorizing through your Web Browser az login Make sure you have enough Quota (change the location if you\u0026rsquo;re not using East US) az vm list-usage --location \u0026#34;East US\u0026#34; -o table See Addendum – Adding Quota to ARO account if you have less than 36 Quota left for Total Regional CPUs\nRegister resource providers az provider register -n Microsoft.RedHatOpenShift --wait az provider register -n Microsoft.Compute --wait az provider register -n Microsoft.Storage --wait az provider register -n Microsoft.Authorization --wait Get Red Hat pull secret Log into cloud.redhat.com Browse to https://cloud.redhat.com/openshift/install/azure/aro-provisioned click the Download pull secret button and remember where you saved it, you\u0026rsquo;ll reference it later. 
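Before starting the deployment, a quick client-side pre-flight check can save a failed run later. This is a minimal sketch, not part of the original guide: the check_cmds helper is illustrative, and the list of CLIs is an assumption based on the tools the following steps use.

```shell
# Illustrative pre-flight helper: verify each required CLI is on PATH.
check_cmds() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing CLI: $cmd" >&2; return 1; }
  done
  echo "pre-flight OK"
}

# CLIs used in the rest of this guide
check_cmds az sshuttle oc || echo "install the missing tools before continuing" >&2
```

If anything is reported missing, install it with the steps above before moving on.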
Deploy Azure OpenShift Variables and Resource Group Set some environment variables to use later, and create an Azure Resource Group.\nSet the following environment variables export AZR_RESOURCE_LOCATION=eastus export AZR_RESOURCE_GROUP=openshift-private export AZR_CLUSTER=private-cluster export AZR_PULL_SECRET=~/Downloads/pull-secret.txt export NETWORK_SUBNET=10.0.0.0/20 export CONTROL_SUBNET=10.0.0.0/24 export MACHINE_SUBNET=10.0.1.0/24 export FIREWALL_SUBNET=10.0.2.0/24 export JUMPHOST_SUBNET=10.0.3.0/24 Create an Azure resource group az group create \\ --name $AZR_RESOURCE_GROUP \\ --location $AZR_RESOURCE_LOCATION Create an Azure Service Principal AZ_SUB_ID=$(az account show --query id -o tsv) AZ_SP_PASS=$(az ad sp create-for-rbac -n \u0026#34;${AZR_CLUSTER}-SP\u0026#34; --role contributor \\ --scopes \u0026#34;/subscriptions/${AZ_SUB_ID}/resourceGroups/${AZR_RESOURCE_GROUP}\u0026#34; \\ --query \u0026#34;password\u0026#34; -o tsv) AZ_SP_ID=$(az ad sp list --display-name \u0026#34;${AZR_CLUSTER}-SP\u0026#34; --query \u0026#34;[0].appId\u0026#34; -o tsv) Networking Create a virtual network with two empty subnets\nCreate virtual network az network vnet create \\ --address-prefixes $NETWORK_SUBNET \\ --name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --resource-group $AZR_RESOURCE_GROUP Create control plane subnet az network vnet subnet create \\ --resource-group $AZR_RESOURCE_GROUP \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$AZR_CLUSTER-aro-control-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $CONTROL_SUBNET Create machine subnet az network vnet subnet create \\ --resource-group $AZR_RESOURCE_GROUP \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$AZR_CLUSTER-aro-machine-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $MACHINE_SUBNET az network vnet subnet update \\ --name 
\u0026#34;$AZR_CLUSTER-aro-control-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --resource-group $AZR_RESOURCE_GROUP \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --disable-private-link-service-network-policies true Egress Public and Private clusters have --outbound-type set to LoadBalancer by default, which means every cluster has open egress to the internet through the public load balancer.\nTo change this default and restrict internet egress, set --outbound-type to UserDefinedRouting during cluster creation and route traffic through a firewall solution.\n1a. NAT Gateway You can skip this step if you don\u0026rsquo;t need to restrict egress.\nCreate a Public IP az network public-ip create -g $AZR_RESOURCE_GROUP \\ -n $AZR_CLUSTER-natgw-ip \\ --sku \u0026#34;Standard\u0026#34; --location $AZR_RESOURCE_LOCATION Create the NAT Gateway az network nat gateway create \\ --resource-group ${AZR_RESOURCE_GROUP} \\ --name \u0026#34;${AZR_CLUSTER}-natgw\u0026#34; \\ --location ${AZR_RESOURCE_LOCATION} \\ --public-ip-addresses \u0026#34;${AZR_CLUSTER}-natgw-ip\u0026#34; Get the Public IP of the NAT Gateway GW_PUBLIC_IP=$(az network public-ip show -g ${AZR_RESOURCE_GROUP} \\ -n \u0026#34;${AZR_CLUSTER}-natgw-ip\u0026#34; --query \u0026#34;ipAddress\u0026#34; -o tsv) echo $GW_PUBLIC_IP Reconfigure the subnets to use the NAT Gateway az network vnet subnet update \\ --name \u0026#34;${AZR_CLUSTER}-aro-control-subnet-${AZR_RESOURCE_LOCATION}\u0026#34; \\ --resource-group ${AZR_RESOURCE_GROUP} \\ --vnet-name \u0026#34;${AZR_CLUSTER}-aro-vnet-${AZR_RESOURCE_LOCATION}\u0026#34; \\ --nat-gateway \u0026#34;${AZR_CLUSTER}-natgw\u0026#34; az network vnet subnet update \\ --name \u0026#34;${AZR_CLUSTER}-aro-machine-subnet-${AZR_RESOURCE_LOCATION}\u0026#34; \\ --resource-group ${AZR_RESOURCE_GROUP} \\ --vnet-name \u0026#34;${AZR_CLUSTER}-aro-vnet-${AZR_RESOURCE_LOCATION}\u0026#34; \\ --nat-gateway 
\u0026#34;${AZR_CLUSTER}-natgw\u0026#34; 1b. Firewall + Internet Egress You can skip this step if you don\u0026rsquo;t need to restrict egress.\nMake sure you have the AZ CLI firewall extensions az extension add -n azure-firewall az extension update -n azure-firewall Create a firewall network, IP, and firewall az network vnet subnet create \\ -g $AZR_RESOURCE_GROUP \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ -n \u0026#34;AzureFirewallSubnet\u0026#34; \\ --address-prefixes $FIREWALL_SUBNET az network public-ip create -g $AZR_RESOURCE_GROUP -n fw-ip \\ --sku \u0026#34;Standard\u0026#34; --location $AZR_RESOURCE_LOCATION az network firewall create -g $AZR_RESOURCE_GROUP \\ -n aro-private -l $AZR_RESOURCE_LOCATION Configure the firewall IP config (this may take 15 minutes) az network firewall ip-config create -g $AZR_RESOURCE_GROUP \\ -f aro-private -n fw-config --public-ip-address fw-ip \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; FWPUBLIC_IP=$(az network public-ip show -g $AZR_RESOURCE_GROUP -n fw-ip --query \u0026#34;ipAddress\u0026#34; -o tsv) FWPRIVATE_IP=$(az network firewall show -g $AZR_RESOURCE_GROUP -n aro-private --query \u0026#34;ipConfigurations[0].privateIPAddress\u0026#34; -o tsv) echo $FWPUBLIC_IP echo $FWPRIVATE_IP Create and configure a route table az network route-table create -g $AZR_RESOURCE_GROUP --name aro-udr sleep 10 az network route-table route create -g $AZR_RESOURCE_GROUP --name aro-udr \\ --route-table-name aro-udr --address-prefix 0.0.0.0/0 \\ --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP az network route-table route create -g $AZR_RESOURCE_GROUP \\ --route-table-name aro-udr --address-prefix 10.0.0.0/16 --name local-route \\ --next-hop-type VirtualNetworkGateway Create firewall rules for ARO resources az network firewall network-rule create -g $AZR_RESOURCE_GROUP -f aro-private \\ --collection-name 
\u0026#39;allow-https\u0026#39; --name allow-all \\ --action allow --priority 100 \\ --source-addresses \u0026#39;*\u0026#39; --dest-addr \u0026#39;*\u0026#39; \\ --protocols \u0026#39;Any\u0026#39; --destination-ports 1-65535 Update the subnets to use the Firewall az network vnet subnet update -g $AZR_RESOURCE_GROUP \\ --vnet-name $AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION \\ --name \u0026#34;$AZR_CLUSTER-aro-control-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --route-table aro-udr az network vnet subnet update -g $AZR_RESOURCE_GROUP \\ --vnet-name $AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION \\ --name \u0026#34;$AZR_CLUSTER-aro-machine-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --route-table aro-udr Create the cluster This will take between 30 and 45 minutes.\naz aro create \\ --resource-group $AZR_RESOURCE_GROUP \\ --name $AZR_CLUSTER \\ --vnet \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --master-subnet \u0026#34;$AZR_CLUSTER-aro-control-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --worker-subnet \u0026#34;$AZR_CLUSTER-aro-machine-subnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --apiserver-visibility Private \\ --ingress-visibility Private \\ --pull-secret @$AZR_PULL_SECRET \\ --client-id \u0026#34;${AZ_SP_ID}\u0026#34; \\ --client-secret \u0026#34;${AZ_SP_PASS}\u0026#34; Be sure to add the --outbound-type UserDefinedRouting flag if you are not using the default routing.\nJump Host With the cluster in a private network, we can create a Jump host in order to connect to it. 
You can do this while the cluster is being created.\nCreate jump subnet az network vnet subnet create \\ --resource-group $AZR_RESOURCE_GROUP \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; \\ --name JumpSubnet \\ --address-prefixes $JUMPHOST_SUBNET \\ --service-endpoints Microsoft.ContainerRegistry Create a jump host az vm create --name jumphost \\ --resource-group $AZR_RESOURCE_GROUP \\ --ssh-key-values $HOME/.ssh/id_rsa.pub \\ --admin-username aro \\ --image \u0026#34;RedHat:RHEL:9_1:9.1.2022112113\u0026#34; \\ --subnet JumpSubnet \\ --public-ip-address jumphost-ip \\ --public-ip-sku Standard \\ --vnet-name \u0026#34;$AZR_CLUSTER-aro-vnet-$AZR_RESOURCE_LOCATION\u0026#34; Save the jump host public IP address JUMP_IP=$(az vm list-ip-addresses -g $AZR_RESOURCE_GROUP -n jumphost -o tsv \\ --query \u0026#39;[].virtualMachine.network.publicIpAddresses[0].ipAddress\u0026#39;) echo $JUMP_IP Use sshuttle to create an SSH VPN via the jump host (use a separate terminal session) sshuttle --dns -NHr \u0026#34;aro@${JUMP_IP}\u0026#34; $NETWORK_SUBNET Get the OpenShift API server URL APISERVER=$(az aro show \\ --name $AZR_CLUSTER \\ --resource-group $AZR_RESOURCE_GROUP \\ -o tsv --query apiserverProfile.url) echo $APISERVER Get OpenShift credentials ADMINPW=$(az aro list-credentials \\ --name $AZR_CLUSTER \\ --resource-group $AZR_RESOURCE_GROUP \\ --query kubeadminPassword \\ -o tsv) Log into OpenShift oc login $APISERVER --username kubeadmin --password ${ADMINPW} Delete Cluster Once you\u0026rsquo;re done, it\u0026rsquo;s a good idea to delete the cluster to ensure that you don\u0026rsquo;t get a surprise bill.\nDelete the cluster az aro delete -y \\ --resource-group $AZR_RESOURCE_GROUP \\ --name $AZR_CLUSTER Delete the Azure resource group Only do this if there\u0026rsquo;s nothing else in the resource group.\naz group delete -y \\ --name $AZR_RESOURCE_GROUP Addendum Adding Quota to ARO account Visit My Quotas in the Azure Console\nChoose the appropriate 
filters:\nSet Provider to \u0026ldquo;Compute\u0026rdquo; Set Subscription to the subscription you are creating the cluster in Set Region to \u0026ldquo;East US\u0026rdquo; and uncheck the other region boxes Search for the quota name that you want to increase.\nNext to the quota name you wish to increase, click the pencil in the Adjustable column to request adjustment\nEnter the new desired quota in the New limit text box. By default, a cluster will need 36 additional Regional vCPUs beyond current usage.\nClick Submit. You may need to go through additional authentication.\nAzure will review your request to adjust your quota. This may take several minutes.\n","permalink":"https://rmmartins.com/2025/01/21/private-aro-cluster-with-access-via-jumphost/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/aro/private-cluster/\"\u003ehttps://cloud.redhat.com/experts/aro/private-cluster/\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eA Quickstart guide to deploying a Private Azure Red Hat OpenShift cluster.\u003c/p\u003e\n\u003ch2 id=\"prerequisites\"\u003ePrerequisites\u003c/h2\u003e\n\u003ch3 id=\"azure-cli\"\u003eAzure CLI\u003c/h3\u003e\n\u003cp\u003e\u003cem\u003eObviously you\u0026rsquo;ll need to have an Azure account to configure the CLI against.\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eMacOS\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eSee \u003ca href=\"https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-macos\"\u003eAzure Docs\u003c/a\u003e for alternative install options.\u003c/em\u003e\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eInstall Azure CLI using homebrew\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ebrew update \u003cspan 
class=\"o\"\u003e\u0026amp;\u0026amp;\u003c/span\u003e brew install azure-cli\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eInstall sshuttle using homebrew\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ebrew install sshuttle\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003e\u003cstrong\u003eLinux\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eSee \u003ca href=\"https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=dnf\"\u003eAzure Docs\u003c/a\u003e for alternative install options.\u003c/em\u003e\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eImport the Microsoft Keys\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esudo rpm --import https://packages.microsoft.com/keys/microsoft.asc\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eAdd the Microsoft Yum Repository\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003ecat \u003cspan class=\"s\"\u003e\u0026lt;\u0026lt; EOF | sudo tee /etc/yum.repos.d/azure-cli.repo\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003e[azure-cli]\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan 
class=\"s\"\u003ename=Azure CLI\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003ebaseurl=https://packages.microsoft.com/yumrepos/azure-cli\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003eenabled=1\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003egpgcheck=1\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003egpgkey=https://packages.microsoft.com/keys/microsoft.asc\n\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003e\u003cspan class=\"s\"\u003eEOF\u003c/span\u003e\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col start=\"3\"\u003e\n\u003cli\u003eInstall Azure CLI\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003esudo dnf install -y azure-cli sshuttle\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003ch3 id=\"prepare-azure-account-for-azure-openshift\"\u003ePrepare Azure Account for Azure OpenShift\u003c/h3\u003e\n\u003col\u003e\n\u003cli\u003eLog into the Azure CLI by running the following and then authorizing through your Web Browser\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eaz login\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003col 
start=\"2\"\u003e\n\u003cli\u003eMake sure you have enough Quota (change the location if you\u0026rsquo;re not using East US)\u003c/li\u003e\n\u003c/ol\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" class=\"chroma\"\u003e\u003ccode class=\"language-bash\" data-lang=\"bash\"\u003e\u003cspan class=\"line\"\u003e\u003cspan class=\"cl\"\u003eaz vm list-usage --location \u003cspan class=\"s2\"\u003e\u0026#34;East US\u0026#34;\u003c/span\u003e -o table\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eSee \u003ca href=\"#adding-quota-to-aro-account\"\u003eAddendum – Adding Quota to ARO account\u003c/a\u003e if you have less than 36 Quota left for Total Regional CPUs\u003c/p\u003e","title":"Private ARO Cluster with Access via JumpHost"},{"content":"When working with development or test environments in Azure, a common need is secure access to internal resources without exposing them directly to the internet. While VPN solutions are a robust way to achieve this, they can often be overkill for simple use cases, especially when you just want to access a few VMs or services for testing. A jump host combined with sshuttle offers a simple, VPN-like solution that can be quickly deployed and used to tunnel traffic to your Azure resources—without the overhead of setting up a full VPN.\nThis guide will walk you through creating a jump host in Azure, automatically generating a new SSH key pair during the VM creation process, and using sshuttle to securely connect to your internal Azure resources.\nWhy Use a Jump Host? A jump host (or bastion host) serves as a gateway into your Azure Virtual Network (VNet) and allows secure access to resources within the network. It\u0026rsquo;s especially useful for developers and IT administrators who need to troubleshoot, test, or access Azure VMs without exposing internal services to the public internet. 
With the help of sshuttle, you can securely tunnel traffic through the jump host to other VMs and services in the network—acting like a lightweight VPN without complex configuration.\nPrerequisites Make sure you have the following before starting:\nAn Azure subscription. Azure CLI installed and configured on your local machine. Basic knowledge of SSH and Azure networking. Step 1: Create the Jump Host VM in Azure You can quickly create a jump host VM in Azure using the Azure CLI. Here, we\u0026rsquo;ll leverage the --generate-ssh-keys flag, which automatically creates a new SSH key pair if none exists in the default ~/.ssh directory. This eliminates the need for manual SSH key generation, making the setup even easier.\nRun the following command:\naz vm create --name jumphost \\ --resource-group rgname \\ --generate-ssh-keys \\ --admin-username user \\ --image \u0026#34;RedHat:RHEL:9_1:9.1.2022112113\u0026#34; \\ --subnet jumpsubnet \\ --public-ip-address jumphost-ip \\ --public-ip-sku Standard \\ --vnet-name jumpvnet Command Breakdown: --name: The name of the jump host VM. --resource-group: The Azure resource group where the VM will be deployed. --generate-ssh-keys: Automatically generates a new SSH key pair if one doesn\u0026rsquo;t exist. If there\u0026rsquo;s an existing key in ~/.ssh, it will be used instead. --admin-username: Sets the admin username for SSH connections. --image: Specifies the base image for the VM (RHEL 9.1 in this example). --subnet: The subnet in the VNet where the VM will be placed. --public-ip-address: Allocates a public IP address for the VM. --public-ip-sku: Sets the IP SKU to \u0026ldquo;Standard\u0026rdquo; for better availability. --vnet-name: The name of the VNet where the subnet is located. This command creates a jump host named jumphost in the specified resource group, with a public IP address for easy SSH access. 
The --generate-ssh-keys parameter will store the newly generated keys in your ~/.ssh directory:\nPrivate key: ~/.ssh/id_rsa Public key: ~/.ssh/id_rsa.pub If you want to specify a custom path for the SSH keys, use the --ssh-key-values parameter instead:\naz vm create --name jumphost \\ --resource-group rgname \\ --ssh-key-values ~/.ssh/my_new_key.pub \\ --admin-username user \\ --image \u0026#34;RedHat:RHEL:9_1:9.1.2022112113\u0026#34; \\ --subnet jumpsubnet \\ --public-ip-address jumphost-ip \\ --public-ip-sku Standard \\ --vnet-name jumpvnet Step 2: Install sshuttle Locally sshuttle is a powerful tool that creates a VPN-like experience using SSH tunneling. Install sshuttle on your local machine with the following commands:\nFor macOS: brew install sshuttle For Ubuntu: sudo apt-get update \u0026amp;\u0026amp; sudo apt-get install sshuttle For RHEL/CentOS: sudo yum install sshuttle Step 3: Set Up an SSH Tunnel Using sshuttle Once your jump host is up and running, you can use sshuttle to securely forward traffic to your Azure VNet. The following command will set up an SSH tunnel through the jump host, allowing your local machine to access the internal Azure subnet:\nsshuttle --dns -NHr \u0026#34;user@\u0026lt;jumphost-public-ip\u0026gt;\u0026#34; 10.0.1.0/24 \u0026amp; Important Note About Running in the Background (\u0026amp;): If sshuttle is running with elevated permissions (e.g., sudo), using \u0026amp; (which runs the command in the background) might break the password prompt, causing the command to fail. If you need sudo for sshuttle, consider one of the following options:\nRun the command without \u0026amp; first to enter the password, then press CTRL + C to stop the command. After that, run the same command again with \u0026amp;. Open a new terminal window and run sshuttle in the background, so you can manage the other terminal independently. 
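As an alternative to the two workarounds above, sshuttle also offers a daemon mode, assuming your installed version supports the --daemon and --pidfile options (verify with sshuttle --help). It prompts for the password in the foreground first and only then detaches, so the prompt is never broken:

```shell
# Assumes this sshuttle build supports --daemon and --pidfile.
# Daemon mode answers the password prompt first, then detaches cleanly,
# avoiding the broken prompt caused by backgrounding with &.
sshuttle --dns -NHr "user@<jumphost-public-ip>" 10.0.1.0/24 \
  --daemon --pidfile=/tmp/sshuttle.pid

# Later, stop the tunnel:
kill "$(cat /tmp/sshuttle.pid)"
```

As with the foreground command, replace the user and jump host address with the values from Step 1.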
Note: Replace \u0026lt;jumphost-public-ip\u0026gt; with the actual public IP address of the VM created in Step 1.\nStep 4: Verify the Tunnel and Connect to Internal Resources After setting up the SSH tunnel, you should be able to access internal resources in the Azure VNet as if you were connected directly. Test this by pinging an internal IP address or SSH-ing into another VM in the network:\nping 10.0.1.4 Or SSH directly into another VM:\nssh user@10.0.1.4 If you can successfully connect, your SSH tunnel is working, and you have secure access to your internal Azure VNet resources.\nWhy Use sshuttle? sshuttle acts as a lightweight VPN without all the complexities, creating a layer 3 VPN over SSH. It forwards TCP packets and DNS queries through your jump host, providing access to your entire Azure VNet securely and quickly.\nFinal Thoughts Setting up a jump host with sshuttle is an excellent solution for developers, testers, and administrators who want easy access to their Azure resources without the need for complex VPN solutions. With automatic SSH key generation and a few simple commands, you can create a secure gateway into your Azure environment and start accessing resources in minutes.\nGive this a try and let me know how it works for you! 😊🔧\n","permalink":"https://rmmartins.com/2024/10/04/creating-a-lightweight-jump-host-in-azure-with-sshuttle-no-vpn-required/","summary":"\u003cp\u003eWhen working with development or test environments in Azure, a common need is secure access to internal resources without exposing them directly to the internet. While VPN solutions are a robust way to achieve this, they can often be overkill for simple use cases, especially when you just want to access a few VMs or services for testing. 
A jump host combined with sshuttle offers a simple, VPN-like solution that can be quickly deployed and used to tunnel traffic to your Azure resources—without the overhead of setting up a full VPN.\u003c/p\u003e","title":"Creating a Lightweight Jump Host in Azure with sshuttle (No VPN Required)"},{"content":"This article was originally published at https://cloud.redhat.com/experts/aro/acm-odf-aro/\nA guide to deploying Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) for Azure Red Hat OpenShift (ARO) Disaster Recovery.\nOverview VolSync is not supported for ARO in ACM: https://access.redhat.com/articles/7006295 so if you run into issues and file a support ticket, you will receive the information that ARO is not supported.\nIn today\u0026rsquo;s fast-paced and data-driven world, ensuring the resilience and availability of your applications and data has never been more critical. The unexpected can happen at any moment, and the ability to recover quickly and efficiently is paramount. That\u0026rsquo;s where OpenShift Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) come into play. In this guide, we will explore the deployment of ACM and ODF for disaster recovery (DR) purposes, empowering you to safeguard your applications and data across multiple clusters.\nSample Architecture\nHub Cluster (East US Region):\nThis is the central control and management cluster of your multi-cluster environment. It hosts Red Hat Advanced Cluster Management (ACM), which is a powerful tool for managing and orchestrating multiple OpenShift clusters. Within the Hub Cluster, you have MultiClusterHub, which is a component of ACM that facilitates the management of multiple OpenShift clusters from a single control point. Additionally, you have OpenShift Data Foundation (ODF) Multicluster Orchestrator in the Hub Cluster. The Hub Cluster shares the same Virtual Network (VNET) with the Primary Cluster, but they use different subnets within that VNET. 
VNET peering is established between the Hub Cluster\u0026rsquo;s VNET and the Secondary Cluster\u0026rsquo;s dedicated VNET in the Central US region. Primary Cluster (East US Region):\nThis cluster serves as the primary application deployment cluster. It has the Submariner Add-On, which enables network connectivity and service discovery between clusters. ODF is also deployed in the Primary Cluster, providing storage and data services to applications running in this cluster. Secondary Cluster (Central US Region):\nThis cluster functions as a secondary or backup cluster for disaster recovery (DR) purposes. Similar to the Primary Cluster, it has the Submariner Add-On to establish network connectivity. ODF is deployed here as well, ensuring that data can be replicated and managed across clusters. The Secondary Cluster resides in its own dedicated VNET in the Central US region. Prerequisites Azure CLI SShuttle to create a SSH VPN (or create an Azure VPN) oc cli Azure Account Log into the Azure CLI az login Make sure you have enough Quota az vm list-usage --location \u0026#34;East US\u0026#34; -o table Register resource providers az provider register -n Microsoft.RedHatOpenShift --wait az provider register -n Microsoft.Compute --wait az provider register -n Microsoft.Storage --wait az provider register -n Microsoft.Authorization --wait Red Hat pull secret Log into https://cloud.redhat.com Browse to https://cloud.redhat.com/openshift/install/azure/aro-provisioned Click the Download pull secret button. 
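Because this walkthrough provisions three ARO clusters (hub, primary, secondary), it is worth doing the quota arithmetic up front. The sketch below uses placeholder numbers: the limit and current values are hypothetical and should be read from the Total Regional vCPUs row of the az vm list-usage output, and the 36-vCPUs-per-cluster figure is an approximation of a default ARO cluster's footprint.

```shell
# Placeholder values (assumptions): read the real Limit and CurrentValue
# from the "Total Regional vCPUs" row of `az vm list-usage -o table`.
limit=350
current=100
per_cluster=36   # approximate vCPU footprint of a default ARO cluster
clusters=3       # hub, primary, secondary
needed=$((per_cluster * clusters))

if [ $((limit - current)) -ge "$needed" ]; then
  echo "quota OK: $((limit - current)) vCPUs free, $needed needed"
else
  echo "request a quota increase before continuing" >&2
fi
```

If the check fails, request a quota increase in the region(s) you plan to use before creating any of the clusters.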
Manage Multiple Logins rm -rf /var/tmp/acm-odf-aro-kubeconfig touch /var/tmp/acm-odf-aro-kubeconfig export KUBECONFIG=/var/tmp/acm-odf-aro-kubeconfig Create clusters Set environment variables export AZR_PULL_SECRET=~/Downloads/pull-secret.txt export EAST_RESOURCE_LOCATION=eastus export EAST_RESOURCE_GROUP=rg-eastus export CENTRAL_RESOURCE_LOCATION=centralus export CENTRAL_RESOURCE_GROUP=rg-centralus Create environment variables for hub cluster export HUB_VIRTUAL_NETWORK=10.0.0.0/20 export HUB_CLUSTER=hub-cluster export HUB_CONTROL_SUBNET=10.0.0.0/24 export HUB_WORKER_SUBNET=10.0.1.0/24 export HUB_JUMPHOST_SUBNET=10.0.10.0/24 Set environment variables for primary cluster export PRIMARY_CLUSTER=primary-cluster export PRIMARY_CONTROL_SUBNET=10.0.2.0/24 export PRIMARY_WORKER_SUBNET=10.0.3.0/24 export PRIMARY_POD_CIDR=10.128.0.0/18 export PRIMARY_SERVICE_CIDR=172.30.0.0/18 Set environment variables for secondary cluster Note: Pod and Service CIDRs CANNOT overlap between primary and secondary clusters (because we are using Submariner).\nexport SECONDARY_CLUSTER=secondary-cluster export SECONDARY_VIRTUAL_NETWORK=192.168.0.0/20 export SECONDARY_CONTROL_SUBNET=192.168.0.0/24 export SECONDARY_WORKER_SUBNET=192.168.1.0/24 export SECONDARY_JUMPHOST_SUBNET=192.168.10.0/24 export SECONDARY_POD_CIDR=10.130.0.0/18 export SECONDARY_SERVICE_CIDR=172.30.128.0/18 Deploying the Hub Cluster Create an Azure resource group az group create \\ --name $EAST_RESOURCE_GROUP \\ --location $EAST_RESOURCE_LOCATION Create virtual network az network vnet create \\ --address-prefixes $HUB_VIRTUAL_NETWORK \\ --name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --resource-group $EAST_RESOURCE_GROUP Create control plane subnet az network vnet subnet create \\ --resource-group $EAST_RESOURCE_GROUP \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$HUB_CLUSTER-aro-control-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ 
--address-prefixes $HUB_CONTROL_SUBNET Create worker subnet az network vnet subnet create \\ --resource-group $EAST_RESOURCE_GROUP \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$HUB_CLUSTER-aro-worker-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $HUB_WORKER_SUBNET Create the cluster (30-45 minutes) az aro create \\ --resource-group $EAST_RESOURCE_GROUP \\ --name $HUB_CLUSTER \\ --vnet \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --master-subnet \u0026#34;$HUB_CLUSTER-aro-control-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --worker-subnet \u0026#34;$HUB_CLUSTER-aro-worker-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --version 4.12.25 \\ --apiserver-visibility Private \\ --ingress-visibility Private \\ --pull-secret @$AZR_PULL_SECRET Deploying the Primary cluster Create control plane subnet az network vnet subnet create \\ --resource-group $EAST_RESOURCE_GROUP \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$PRIMARY_CLUSTER-aro-control-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $PRIMARY_CONTROL_SUBNET Create worker subnet az network vnet subnet create \\ --resource-group $EAST_RESOURCE_GROUP \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$PRIMARY_CLUSTER-aro-worker-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $PRIMARY_WORKER_SUBNET Create the cluster (30-45 minutes) az aro create \\ --resource-group $EAST_RESOURCE_GROUP \\ --name $PRIMARY_CLUSTER \\ --vnet \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --master-subnet \u0026#34;$PRIMARY_CLUSTER-aro-control-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --worker-subnet \u0026#34;$PRIMARY_CLUSTER-aro-worker-subnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --version 4.12.25 \\ --apiserver-visibility Private \\ --ingress-visibility Private \\ --pull-secret 
@$AZR_PULL_SECRET \\ --pod-cidr $PRIMARY_POD_CIDR \\ --service-cidr $PRIMARY_SERVICE_CIDR Connect to Hub and Primary Clusters Create the jump subnet az network vnet subnet create \\ --resource-group $EAST_RESOURCE_GROUP \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; \\ --name jump-subnet \\ --address-prefixes $HUB_JUMPHOST_SUBNET Create a jump host az vm create --name jumphost \\ --resource-group $EAST_RESOURCE_GROUP \\ --ssh-key-values $HOME/.ssh/id_rsa.pub \\ --admin-username aro \\ --image \u0026#34;RedHat:RHEL:9_1:9.1.2022112113\u0026#34; \\ --subnet jump-subnet \\ --public-ip-address jumphost-ip \\ --public-ip-sku Standard \\ --vnet-name \u0026#34;$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION\u0026#34; Save the jump host public IP address EAST_JUMP_IP=$(az vm list-ip-addresses -g $EAST_RESOURCE_GROUP -n jumphost -o tsv \\ --query \u0026#39;[].virtualMachine.network.publicIpAddresses[0].ipAddress\u0026#39;) echo $EAST_JUMP_IP Use sshuttle to create an SSH VPN via the jump host (use a separate terminal session) sshuttle --dns -NHr \u0026#34;aro@${EAST_JUMP_IP}\u0026#34; $HUB_VIRTUAL_NETWORK Get OpenShift API routes HUB_APISERVER=$(az aro show \\ --name $HUB_CLUSTER \\ --resource-group $EAST_RESOURCE_GROUP \\ -o tsv --query apiserverProfile.url) PRIMARY_APISERVER=$(az aro show \\ --name $PRIMARY_CLUSTER \\ --resource-group $EAST_RESOURCE_GROUP \\ -o tsv --query apiserverProfile.url) Get OpenShift credentials HUB_ADMINPW=$(az aro list-credentials \\ --name $HUB_CLUSTER \\ --resource-group $EAST_RESOURCE_GROUP \\ --query kubeadminPassword \\ -o tsv) PRIMARY_ADMINPW=$(az aro list-credentials \\ --name $PRIMARY_CLUSTER \\ --resource-group $EAST_RESOURCE_GROUP \\ --query kubeadminPassword \\ -o tsv) Log into Hub and configure context oc login $HUB_APISERVER --username kubeadmin --password ${HUB_ADMINPW} oc config rename-context $(oc config current-context) hub oc config use hub Log into Primary and configure context oc login
$PRIMARY_APISERVER --username kubeadmin --password ${PRIMARY_ADMINPW} oc config rename-context $(oc config current-context) primary oc config use primary Deploying the Secondary Cluster Create an Azure resource group az group create \\ --name $CENTRAL_RESOURCE_GROUP \\ --location $CENTRAL_RESOURCE_LOCATION Create virtual network az network vnet create \\ --address-prefixes $SECONDARY_VIRTUAL_NETWORK \\ --name \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --resource-group $CENTRAL_RESOURCE_GROUP Create subnets and cluster (30-45 minutes) az network vnet subnet create \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --vnet-name \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$SECONDARY_CLUSTER-aro-control-subnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $SECONDARY_CONTROL_SUBNET az network vnet subnet create \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --vnet-name \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --name \u0026#34;$SECONDARY_CLUSTER-aro-worker-subnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --address-prefixes $SECONDARY_WORKER_SUBNET az aro create \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --name $SECONDARY_CLUSTER \\ --vnet \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --master-subnet \u0026#34;$SECONDARY_CLUSTER-aro-control-subnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --worker-subnet \u0026#34;$SECONDARY_CLUSTER-aro-worker-subnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --version 4.12.25 \\ --apiserver-visibility Private \\ --ingress-visibility Private \\ --pull-secret @$AZR_PULL_SECRET \\ --pod-cidr $SECONDARY_POD_CIDR \\ --service-cidr $SECONDARY_SERVICE_CIDR VNet Peering Create a peering between both VNETs (Hub Cluster in EastUS and Secondary Cluster in Central US)\nexport RG_EASTUS=$EAST_RESOURCE_GROUP export RG_CENTRALUS=$CENTRAL_RESOURCE_GROUP export 
VNET_EASTUS=$HUB_CLUSTER-aro-vnet-$EAST_RESOURCE_LOCATION export VNET_CENTRALUS=$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION VNET_EASTUS_ID=$(az network vnet show --resource-group $RG_EASTUS --name $VNET_EASTUS --query id --out tsv) VNET_CENTRALUS_ID=$(az network vnet show --resource-group $RG_CENTRALUS --name $VNET_CENTRALUS --query id --out tsv) az network vnet peering create --name \u0026#34;Link\u0026#34;-$VNET_EASTUS-\u0026#34;To\u0026#34;-$VNET_CENTRALUS \\ --resource-group $RG_EASTUS \\ --vnet-name $VNET_EASTUS \\ --remote-vnet $VNET_CENTRALUS_ID \\ --allow-vnet-access=True \\ --allow-forwarded-traffic=True \\ --allow-gateway-transit=True az network vnet peering create --name \u0026#34;Link\u0026#34;-$VNET_CENTRALUS-\u0026#34;To\u0026#34;-$VNET_EASTUS \\ --resource-group $RG_CENTRALUS \\ --vnet-name $VNET_CENTRALUS \\ --remote-vnet $VNET_EASTUS_ID \\ --allow-vnet-access \\ --allow-forwarded-traffic=True \\ --allow-gateway-transit=True Connect to Secondary cluster Create the jump subnet and host az network vnet subnet create \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --vnet-name \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; \\ --name jump-subnet \\ --address-prefixes $SECONDARY_JUMPHOST_SUBNET az vm create --name jumphost \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --ssh-key-values $HOME/.ssh/id_rsa.pub \\ --admin-username aro \\ --image \u0026#34;RedHat:RHEL:9_1:9.1.2022112113\u0026#34; \\ --subnet jump-subnet \\ --public-ip-address jumphost-ip \\ --public-ip-sku Standard \\ --vnet-name \u0026#34;$SECONDARY_CLUSTER-aro-vnet-$CENTRAL_RESOURCE_LOCATION\u0026#34; Connect via sshuttle (in a separate terminal) CENTRAL_JUMP_IP=$(az vm list-ip-addresses -g $CENTRAL_RESOURCE_GROUP -n jumphost -o tsv \\ --query \u0026#39;[].virtualMachine.network.publicIpAddresses[0].ipAddress\u0026#39;) sshuttle --dns -NHr \u0026#34;aro@${CENTRAL_JUMP_IP}\u0026#34; $SECONDARY_VIRTUAL_NETWORK Log into Secondary and configure context 
SECONDARY_APISERVER=$(az aro show \\ --name $SECONDARY_CLUSTER \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ -o tsv --query apiserverProfile.url) SECONDARY_ADMINPW=$(az aro list-credentials \\ --name $SECONDARY_CLUSTER \\ --resource-group $CENTRAL_RESOURCE_GROUP \\ --query kubeadminPassword \\ -o tsv) oc login $SECONDARY_APISERVER --username kubeadmin --password ${SECONDARY_ADMINPW} oc config rename-context $(oc config current-context) secondary oc config use secondary Setup Hub Cluster oc config use hub Configure ACM Create ACM namespace cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: open-cluster-management labels: openshift.io/cluster-monitoring: \u0026#34;true\u0026#34; EOF Create ACM Operator Group cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: open-cluster-management namespace: open-cluster-management spec: targetNamespaces: - open-cluster-management EOF Install ACM version 2.8 cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: advanced-cluster-management namespace: open-cluster-management spec: channel: release-2.8 installPlanApproval: Automatic name: advanced-cluster-management source: redhat-operators sourceNamespace: openshift-marketplace EOF Check if installation succeeded oc wait --for=jsonpath=\u0026#39;{.status.phase}\u0026#39;=\u0026#39;Succeeded\u0026#39; csv -n open-cluster-management \\ -l operators.coreos.com/advanced-cluster-management.open-cluster-management=\u0026#39;\u0026#39; Install MultiClusterHub instance cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: operator.open-cluster-management.io/v1 kind: MultiClusterHub metadata: namespace: open-cluster-management name: multiclusterhub spec: {} EOF Check that the MultiClusterHub is running oc wait --for=jsonpath=\u0026#39;{.status.phase}\u0026#39;=\u0026#39;Running\u0026#39; multiclusterhub multiclusterhub -n 
open-cluster-management \\ --timeout=600s Configure ODF Multicluster Orchestrator Install the ODF Multicluster Orchestrator version 4.12 cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/odf-multicluster-orchestrator.openshift-operators: \u0026#34;\u0026#34; name: odf-multicluster-orchestrator namespace: openshift-operators spec: channel: stable-4.12 installPlanApproval: Automatic name: odf-multicluster-orchestrator source: redhat-operators sourceNamespace: openshift-marketplace EOF Check if installation succeeded oc wait --for=jsonpath=\u0026#39;{.status.phase}\u0026#39;=\u0026#39;Succeeded\u0026#39; csv -n openshift-operators \\ -l operators.coreos.com/odf-multicluster-orchestrator.openshift-operators=\u0026#39;\u0026#39; Import Clusters into ACM Create a Managed Cluster Set oc config use hub export MANAGED_CLUSTER_SET_NAME=aro-clusters cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: $MANAGED_CLUSTER_SET_NAME EOF Retrieve token and server from primary cluster oc config use primary PRIMARY_API=$(oc whoami --show-server) PRIMARY_TOKEN=$(oc whoami -t) Retrieve token and server from secondary cluster oc config use secondary SECONDARY_API=$(oc whoami --show-server) SECONDARY_TOKEN=$(oc whoami -t) Import Primary Cluster oc config use hub cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: $PRIMARY_CLUSTER labels: cluster.open-cluster-management.io/clusterset: $MANAGED_CLUSTER_SET_NAME cloud: auto-detect vendor: auto-detect spec: hubAcceptsClient: true EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: auto-import-secret namespace: $PRIMARY_CLUSTER stringData: autoImportRetry: \u0026#34;2\u0026#34; token: \u0026#34;${PRIMARY_TOKEN}\u0026#34; server: 
\u0026#34;${PRIMARY_API}\u0026#34; type: Opaque EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: $PRIMARY_CLUSTER namespace: $PRIMARY_CLUSTER spec: clusterName: $PRIMARY_CLUSTER clusterNamespace: $PRIMARY_CLUSTER clusterLabels: cloud: auto-detect vendor: auto-detect cluster.open-cluster-management.io/clusterset: $MANAGED_CLUSTER_SET_NAME applicationManager: enabled: true policyController: enabled: true searchCollector: enabled: true certPolicyController: enabled: true iamPolicyController: enabled: true EOF oc get managedclusters Import Secondary Cluster cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: $SECONDARY_CLUSTER labels: cluster.open-cluster-management.io/clusterset: $MANAGED_CLUSTER_SET_NAME cloud: auto-detect vendor: auto-detect spec: hubAcceptsClient: true EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: auto-import-secret namespace: $SECONDARY_CLUSTER stringData: autoImportRetry: \u0026#34;2\u0026#34; token: \u0026#34;${SECONDARY_TOKEN}\u0026#34; server: \u0026#34;${SECONDARY_API}\u0026#34; type: Opaque EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: $SECONDARY_CLUSTER namespace: $SECONDARY_CLUSTER spec: clusterName: $SECONDARY_CLUSTER clusterNamespace: $SECONDARY_CLUSTER clusterLabels: cloud: auto-detect vendor: auto-detect cluster.open-cluster-management.io/clusterset: $MANAGED_CLUSTER_SET_NAME applicationManager: enabled: true policyController: enabled: true searchCollector: enabled: true certPolicyController: enabled: true iamPolicyController: enabled: true EOF oc get managedclusters Configure Submariner Add-On Create Broker configuration cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: 
submariner-broker namespace: $MANAGED_CLUSTER_SET_NAME-broker labels: cluster.open-cluster-management.io/backup: submariner spec: globalnetEnabled: false EOF Deploy Submariner to Primary cluster cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: $PRIMARY_CLUSTER spec: IPSecNATTPort: 4500 NATTEnable: true cableDriver: libreswan loadBalancerEnable: true gatewayConfig: gateways: 1 EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: submariner namespace: $PRIMARY_CLUSTER spec: installNamespace: submariner-operator EOF Deploy Submariner to Secondary cluster cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: $SECONDARY_CLUSTER spec: IPSecNATTPort: 4500 NATTEnable: true cableDriver: libreswan loadBalancerEnable: true gatewayConfig: gateways: 1 EOF cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: submariner namespace: $SECONDARY_CLUSTER spec: installNamespace: submariner-operator EOF Check connection status oc -n $PRIMARY_CLUSTER get managedclusteraddons submariner -o yaml Look for the connection established status:\nmessage: The connection between clusters \u0026#34;primary-cluster\u0026#34; and \u0026#34;secondary-cluster\u0026#34; is established reason: ConnectionsEstablished status: \u0026#34;False\u0026#34; type: SubmarinerConnectionDegraded Install ODF Primary Cluster\noc config use primary Follow these steps to deploy ODF: https://cloud.redhat.com/experts/aro/odf/\nSecondary Cluster\noc config use secondary Follow these steps to deploy ODF: https://cloud.redhat.com/experts/aro/odf/\nFinishing the setup of the disaster recovery solution Creating Disaster 
Recovery Policy on Hub cluster oc config use hub cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPolicy metadata: name: drpolicy spec: drClusters: - primary-cluster - secondary-cluster schedulingInterval: 5m EOF Wait for DR policy to be validated (can take up to 10 minutes):\noc get drpolicy drpolicy -o yaml Creating the Application and Failover Create namespace and PlacementRule cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: busybox-sample EOF cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: busybox-placementrule namespace: busybox-sample spec: clusterSelector: matchLabels: name: primary-cluster schedulerName: ramen EOF Create application with ACM cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: busybox-sample namespace: busybox-sample spec: componentKinds: - group: apps.open-cluster-management.io kind: Subscription descriptor: {} selector: matchExpressions: - key: app operator: In values: - busybox-sample --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: annotations: apps.open-cluster-management.io/reconcile-rate: medium name: busybox-sample namespace: busybox-sample spec: type: Git pathname: \u0026#39;https://github.com/RamenDR/ocm-ramen-samples\u0026#39; --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/git-branch: main apps.open-cluster-management.io/git-path: busybox-odr apps.open-cluster-management.io/reconcile-option: merge labels: app: busybox-sample name: busybox-sample-subscription-1 namespace: busybox-sample spec: channel: busybox-sample/busybox-sample placement: placementRef: kind: PlacementRule name: busybox-placementrule EOF Associate the DR policy to the application cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: 
ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: labels: cluster.open-cluster-management.io/backup: resource name: busybox-placementrule-drpc namespace: busybox-sample spec: drPolicyRef: name: drpolicy placementRef: kind: PlacementRule name: busybox-placementrule namespace: busybox-sample preferredCluster: $PRIMARY_CLUSTER pvcSelector: matchLabels: appname: busybox-sample EOF Failover sample application to secondary cluster cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: labels: cluster.open-cluster-management.io/backup: resource name: busybox-placementrule-drpc namespace: busybox-sample spec: action: Failover failoverCluster: $SECONDARY_CLUSTER drPolicyRef: name: drpolicy placementRef: kind: PlacementRule name: busybox-placementrule namespace: busybox-sample pvcSelector: matchLabels: appname: busybox-sample EOF Verify application runs in secondary cluster oc config use secondary oc get pods -n busybox-sample Cleanup az aro delete -y \\ --resource-group rg-eastus \\ --name hub-cluster az aro delete -y \\ --resource-group rg-eastus \\ --name primary-cluster az group delete --name rg-eastus az aro delete -y \\ --resource-group rg-centralus \\ --name secondary-cluster az group delete --name rg-centralus Additional reference resources Virtual Network Peering Regional-DR solution for OpenShift Data Foundation Private ARO Cluster with access via JumpHost Deploy ACM Submariner for connect overlay networks ARO – ROSA clusters Configure ARO with OpenShift Data Foundation OpenShift Regional Disaster Recovery with Advanced Cluster Management ","permalink":"https://rmmartins.com/2024/10/04/deploying-advanced-cluster-management-and-openshift-data-foundation-for-aro-disaster-recovery/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca 
href=\"https://cloud.redhat.com/experts/aro/acm-odf-aro/\"\u003ehttps://cloud.redhat.com/experts/aro/acm-odf-aro/\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eA guide to deploying Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) for Azure Red Hat OpenShift (ARO) Disaster Recovery.\u003c/p\u003e\n\u003ch2 id=\"overview\"\u003eOverview\u003c/h2\u003e\n\u003cblockquote\u003e\n\u003cp\u003eVolSync is not supported for ARO in ACM: \u003ca href=\"https://access.redhat.com/articles/7006295\"\u003ehttps://access.redhat.com/articles/7006295\u003c/a\u003e so if you run into issues and file a support ticket, you will receive the information that ARO is not supported.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eIn today\u0026rsquo;s fast-paced and data-driven world, ensuring the resilience and availability of your applications and data has never been more critical. The unexpected can happen at any moment, and the ability to recover quickly and efficiently is paramount. That\u0026rsquo;s where OpenShift Advanced Cluster Management (ACM) and OpenShift Data Foundation (ODF) come into play. In this guide, we will explore the deployment of ACM and ODF for disaster recovery (DR) purposes, empowering you to safeguard your applications and data across multiple clusters.\u003c/p\u003e","title":"Deploying Advanced Cluster Management and OpenShift Data Foundation for ARO Disaster Recovery"},{"content":"This article was originally published at Configure ARO to use Microsoft Entra ID Group Claims | Red Hat Cloud Experts\nThis guide demonstrates how to utilize the OpenID Connect group claim functionality implemented in OpenShift 4.10. This functionality allows an identity provider to provide a user\u0026rsquo;s group membership for use within OpenShift. 
This guide will walk through creating an Azure Active Directory (Azure AD) application, configuring the necessary Azure AD groups, and configuring Azure Red Hat OpenShift (ARO) to authenticate and manage authorization using Azure AD.\nThis guide will walk through the following steps:\nRegister a new application in Azure AD for authentication. Configure the application registration in Azure AD to include optional and group claims in tokens. Configure the Azure Red Hat OpenShift (ARO) cluster to use Azure AD as the identity provider. Grant additional permissions to individual groups. Before you Begin Create a set of security groups and assign users by following the Microsoft documentation.\nIn addition, if you are using zsh as your shell (which is the default shell on macOS) you may need to run set -k to get the below commands to run without errors. This is because zsh disables comments in interactive shells.\nCapture the OAuth callback URL First, construct the cluster\u0026rsquo;s OAuth callback URL and make note of it. To do so, run the following command, making sure to replace the variables specified:\nThe \u0026ldquo;AAD\u0026rdquo; directory at the end of the OAuth callback URL should match the OAuth identity provider name you\u0026rsquo;ll set up later.\nRESOURCE_GROUP=example-rg # Replace this with the name of your ARO cluster\u0026#39;s resource group CLUSTER_NAME=example-cluster # Replace this with the name of your ARO cluster echo \u0026#39;OAuth callback URL: \u0026#39;$(az aro show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query consoleProfile.url -o tsv | sed \u0026#39;s/console-openshift-console/oauth-openshift/\u0026#39;)\u0026#39;oauth2callback/AAD\u0026#39; Register a new application in Azure AD Second, you need to create the Azure AD application itself.
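The hostname rewrite inside that command is a plain string substitution, so it can be sketched in isolation. The console URL below is hypothetical; the real one comes from az aro show --query consoleProfile.url.

```shell
# Sketch of the callback-URL derivation, using a made-up console URL.
CONSOLE_URL="https://console-openshift-console.apps.example.eastus.aroapp.io/"
IDP_NAME="AAD"   # must match the identity provider name configured later

# Swap the console hostname for the OAuth endpoint, then append the callback path.
CALLBACK_URL="$(printf '%s' "$CONSOLE_URL" | sed 's/console-openshift-console/oauth-openshift/')oauth2callback/${IDP_NAME}"
echo "$CALLBACK_URL"
# https://oauth-openshift.apps.example.eastus.aroapp.io/oauth2callback/AAD
```

Note that the console URL returned by az aro show ends with a trailing slash, which is why the command appends oauth2callback/AAD with no leading slash.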
To do so, log in to the Azure portal and navigate to the App registrations blade, then click on \u0026ldquo;New registration\u0026rdquo; to create a new application.\nProvide a name for the application, for example openshift-auth. Select \u0026ldquo;Web\u0026rdquo; from the Redirect URI dropdown and fill in the Redirect URI using the value of the OAuth callback URL you retrieved in the previous step. Once you fill in the necessary information, click \u0026ldquo;Register\u0026rdquo; to create the application.\nThen, click on the \u0026ldquo;Certificates \u0026amp; secrets\u0026rdquo; sub-blade and select \u0026ldquo;New client secret\u0026rdquo;. Fill in the requested details and make note of the generated client secret value, as you\u0026rsquo;ll use it in a later step. You won\u0026rsquo;t be able to retrieve it again.\nThen, click on the \u0026ldquo;Overview\u0026rdquo; sub-blade and make note of the \u0026ldquo;Application (client) ID\u0026rdquo; and \u0026ldquo;Directory (tenant) ID\u0026rdquo;. You\u0026rsquo;ll need those values in a later step as well.\n2. Configure optional and group claims In order to provide OpenShift with enough information about the user to create their account, we will configure Azure AD to provide two optional claims, specifically \u0026ldquo;email\u0026rdquo; and \u0026ldquo;preferred_username\u0026rdquo;, as well as a group claim when a user logs in.
For more information on optional claims in Azure AD, see the Microsoft documentation.\nClick on the \u0026ldquo;Token configuration\u0026rdquo; sub-blade and select the \u0026ldquo;Add optional claim\u0026rdquo; button.\nSelect ID, then check the \u0026ldquo;email\u0026rdquo; and \u0026ldquo;preferred_username\u0026rdquo; claims and click the \u0026ldquo;Add\u0026rdquo; button to configure them for your Azure AD application.\nWhen prompted, enable the necessary Microsoft Graph permissions.\nNext, select the \u0026ldquo;Add groups claim\u0026rdquo; button.\nSelect the \u0026ldquo;Security groups\u0026rdquo; option and click the \u0026ldquo;Add\u0026rdquo; button to configure group claims for your Azure AD application.\nNote: In this example, we are providing all security groups a user is a member of via the group claim. In a real production environment, we highly recommend scoping the groups provided by the group claim to only those groups which are applicable to OpenShift.\nFinally, grant admin consent in the API permissions section.\n3. Configure the ARO cluster to use Azure AD Now we need to configure OpenShift to use Azure AD as its identity provider.\nTo do so, ensure you are logged in to the OpenShift command line interface (oc) by running the following command, making sure to replace the variables specified:\nRESOURCE_GROUP=example-rg # Replace this with the name of your ARO cluster\u0026#39;s resource group CLUSTER_NAME=example-cluster # Replace this with the name of your ARO cluster oc login \\ $(az aro show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query apiserverProfile.url -o tsv) \\ -u $(az aro list-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME --query kubeadminUsername -o tsv) \\ -p $(az aro list-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME --query kubeadminPassword -o tsv) Next, create a secret that contains the client secret that you captured earlier.
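Once a user has logged in, you can confirm that the optional and group claims actually arrive by decoding the payload of an ID token. The helper and token below are illustrative only, the token is assembled locally rather than issued by Azure AD; with a real token, pass its value to the helper instead.

```shell
# Sketch: decode the middle (payload) segment of a JWT to inspect its claims.
jwt_payload() {
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')    # base64url -> base64
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done  # restore stripped padding
  printf '%s' "$p" | base64 -d
}

# A locally assembled stand-in for a real ID token (header.payload.signature).
claims='{"email":"user@example.com","preferred_username":"user@example.com","groups":["w1"]}'
token="header.$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '/+' '_-').signature"
jwt_payload "$token"
```

With a token from a real login, the groups array should contain the object IDs of the security groups you configured, which is what OpenShift will use for group membership.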
To do so, run the following command, making sure to replace the variable specified:\nCLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # Replace this with the Client Secret oc create secret generic openid-client-secret --from-literal=clientSecret=${CLIENT_SECRET} -n openshift-config Next, generate the necessary YAML for the cluster\u0026rsquo;s OAuth provider to use Azure AD as its identity provider. To do so, run the following command, making sure to replace the variables specified:\nIDP_NAME=AAD # Replace this with the name you used in the OAuth callback URL APP_ID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy # Replace this with the Application (client) ID TENANT_ID=zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz # Replace this with the Directory (tenant) ID cat \u0026lt;\u0026lt; EOF \u0026gt; cluster-oauth-config.yaml apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - mappingMethod: claim name: ${IDP_NAME} openID: claims: email: - email groups: - groups name: - name preferredUsername: - preferred_username clientID: ${APP_ID} clientSecret: name: openid-client-secret extraScopes: - profile - openid issuer: https://login.microsoftonline.com/${TENANT_ID}/v2.0 type: OpenID EOF Feel free to further modify this output (which is saved in your current directory as cluster-oauth-config.yaml).\nFinally, apply the new configuration to the cluster\u0026rsquo;s OAuth provider by running the following command:\noc apply -f ./cluster-oauth-config.yaml Note: It is normal to receive an error that says an annotation is missing when you run oc apply for the first time. This can be safely ignored.\nOnce the cluster authentication operator reconciles your changes (generally within a few minutes), you will be able to login to the cluster using Azure AD. In addition, the cluster OAuth provider will automatically create or update the membership of groups the user is a member of (using the group ID). 
The provider does not automatically create RoleBindings and ClusterRoleBindings for the groups that are created; you are responsible for creating those via your own processes.\nIf you have a private cluster behind a firewall, you may get an error message when you try to log in to the web console using the AAD option. In this case, you should open a firewall rule allowing access from the cluster to graph.microsoft.com.\nIf you are using Azure Firewall, you can run the following commands to allow this access:\naz network firewall network-rule create -g $AZR_RESOURCE_GROUP -f aro-private \\ --collection-name \u0026#39;Allow_Microsoft_Graph\u0026#39; --action allow --priority 100 \\ -n \u0026#39;Microsoft_Graph\u0026#39; --protocols \u0026#39;any\u0026#39; \\ --source-addresses \u0026#39;*\u0026#39; --destination-fqdns \u0026#39;graph.microsoft.com\u0026#39; \\ --destination-ports \u0026#39;443\u0026#39; Now you should be able to log in by choosing the AAD option:\nThen enter the user you would like to use:\n4. Grant additional permissions to individual groups Once you log in, you will notice that you have very limited permissions. This is because, by default, OpenShift only grants you the ability to create new projects (namespaces) in the cluster. Other projects (namespaces) are restricted from view. The cluster OAuth provider does not automatically create RoleBindings and ClusterRoleBindings for the groups that are created; you are responsible for creating those via your own processes.\nOpenShift includes a significant number of pre-configured roles, including the cluster-admin role that grants full access and control over the cluster.
To grant an automatically generated group access to the cluster-admin role, you must create a ClusterRoleBinding to the group ID.\nGROUP_ID=wwwwwwww-wwww-wwww-wwww-wwwwwwwwwwww # Replace with your Azure AD Group ID that you would like to have cluster admin permissions oc create clusterrolebinding cluster-admin-group \\ --clusterrole=cluster-admin \\ --group=$GROUP_ID Now, any user in the specified group will automatically be granted cluster-admin access.\nFor more information on how to use RBAC to define and apply permissions in OpenShift, see the OpenShift documentation.\n","permalink":"https://rmmartins.com/2024/10/03/configure-aro-to-use-microsoft-entra-id-group-claims/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/idp/group-claims/aro/\"\u003eConfigure ARO to use Microsoft Entra ID Group Claims | Red Hat Cloud Experts\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eThis guide demonstrates how to utilize the OpenID Connect group claim functionality implemented in OpenShift 4.10. This functionality allows an identity provider to provide a user\u0026rsquo;s group membership for use within OpenShift. This guide will walk through the creation of an Azure Active Directory (Azure AD) application, configure the necessary Azure AD groups, and configure Azure Red Hat OpenShift (ARO) to authenticate and manage authorization using Azure AD.\u003c/p\u003e","title":"Configure ARO to Use Microsoft Entra ID Group Claims"},{"content":"This article was originally published at ARO with Nvidia GPU Workloads | Red Hat Cloud Experts\nARO guide to running Nvidia GPU workloads.\nPrerequisites oc cli Helm jq, moreutils, and gettext package An ARO 4.14 cluster Note: If you need to install an ARO cluster, please read our ARO Terraform Install Guide. 
Whether you\u0026rsquo;re installing a new ARO cluster or using an existing one, be sure it is 4.14.x or higher.\nNote: Please ensure your ARO cluster was created with a valid pull secret (to verify, make sure you can see the Operator Hub in the cluster\u0026rsquo;s console). If not, you can follow these instructions.\nLinux:\nsudo dnf install jq moreutils gettext MacOS:\nbrew install jq moreutils gettext helm openshift-cli Helm Prerequisites If you plan to use Helm to deploy the GPU operator, you will need to do the following:\nAdd the MOBB chart repository to your Helm helm repo add mobb https://rh-mobb.github.io/helm-charts/ Update your repositories helm repo update GPU Quota All GPU quotas in Azure are 0 by default. You will need to log in to the Azure portal and request GPU quota. There is a lot of competition for GPU workers, so you may have to provision an ARO cluster in a region where you can actually reserve GPU capacity.\nARO supports the following GPU workers:\nNC4as T4 v3 NC6s v3 NC8as T4 v3 NC12s v3 NC16as T4 v3 NC24s v3 NC24rs v3 NC64as T4 v3 Please remember that Azure quota is requested per core, not per VM. To request a single NC4as T4 v3 node, you will need to request quota in groups of 4. If you wish to request an NC16as T4 v3, you will need to request quota of 16.\nLogin to Azure Log in to portal.azure.com, type \u0026ldquo;quotas\u0026rdquo; in the search bar, click on Compute and in the search box type \u0026ldquo;NCAsv3_T4\u0026rdquo;. Select the region your cluster is in (select checkbox) and then click Request quota increase and ask for quota (I chose 8 so I can build two demo clusters of NC4as T4s). The Helm chart we use below will request a single Standard_NC4as_T4_v3 machine.\nConfigure quota Log in to your ARO cluster Login to OpenShift – we\u0026rsquo;ll use the kubeadmin account here but you can log in with your user account as long as you have cluster-admin. 
oc login \u0026lt;apiserver\u0026gt; -u kubeadmin -p \u0026lt;kubeadminpass\u0026gt; GPU Machine Set ARO still uses Kubernetes MachineSets to provision worker machines. I\u0026rsquo;m going to export the first machine set in my cluster (az 1) and use that as a template to build a single GPU machine in southcentralus region 1.\nYou can create the machine set the easy way using Helm, or manually. We recommend using the Helm chart method.\nOption 1 – Helm Create a new machine-set (replicas of 1), see the Chart\u0026rsquo;s values file for configuration options helm upgrade --install -n openshift-machine-api \\ gpu mobb/aro-gpu Switch to the proper namespace (project): oc project openshift-machine-api Wait for the new GPU nodes to be available watch oc -n openshift-machine-api get machines Skip past Option 2 – Manually to Install Nvidia GPU Operator Option 2 – Manually View existing machine sets MACHINESET=$(oc get machineset -n openshift-machine-api -o=jsonpath=\u0026#39;{.items[0]}\u0026#39; | jq -r \u0026#39;[.metadata.name] | @tsv\u0026#39;) Save a copy of example machine set oc get machineset -n openshift-machine-api $MACHINESET -o json \u0026gt; gpu_machineset.json Change the .metadata.name field to a new unique name jq \u0026#39;.metadata.name = \u0026#34;nvidia-worker-southcentralus1\u0026#34;\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Ensure spec.replicas matches the desired replica count jq \u0026#39;.spec.replicas = 1\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Change the matchLabels field jq \u0026#39;.spec.selector.matchLabels.\u0026#34;machine.openshift.io/cluster-api-machineset\u0026#34; = \u0026#34;nvidia-worker-southcentralus1\u0026#34;\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Change the template metadata labels jq \u0026#39;.spec.template.metadata.labels.\u0026#34;machine.openshift.io/cluster-api-machineset\u0026#34; = \u0026#34;nvidia-worker-southcentralus1\u0026#34;\u0026#39; gpu_machineset.json | sponge 
gpu_machineset.json Change the vmSize to the desired GPU instance type jq \u0026#39;.spec.template.spec.providerSpec.value.vmSize = \u0026#34;Standard_NC4as_T4_v3\u0026#34;\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Change the zone jq \u0026#39;.spec.template.spec.providerSpec.value.zone = \u0026#34;1\u0026#34;\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Delete the .status section jq \u0026#39;del(.status)\u0026#39; gpu_machineset.json | sponge gpu_machineset.json Verify the other data in the yaml file. Create GPU machine set Create GPU Machine set oc create -f gpu_machineset.json Verify GPU machine set oc get machineset -n openshift-machine-api oc get machine -n openshift-machine-api Once the machines are provisioned (5-15 minutes), they will show as nodes:\noc get nodes Install Nvidia GPU Operator This will create the nvidia-gpu-operator namespace, set up the operator group and install the Nvidia GPU Operator.\nOption 1 – Helm Create namespaces oc create namespace openshift-nfd oc create namespace nvidia-gpu-operator Use the mobb/operatorhub chart to deploy the needed operators helm upgrade -n nvidia-gpu-operator nvidia-gpu-operator \\ mobb/operatorhub --install \\ --values https://raw.githubusercontent.com/rh-mobb/helm-charts/main/charts/nvidia-gpu/files/operatorhub.yaml Wait until the two operators are running oc wait --for=jsonpath=\u0026#39;{.status.replicas}\u0026#39;=1 deployment \\ nfd-controller-manager -n openshift-nfd --timeout=600s oc wait --for=jsonpath=\u0026#39;{.status.replicas}\u0026#39;=1 deployment \\ gpu-operator -n nvidia-gpu-operator --timeout=600s Install the Nvidia GPU Operator chart helm upgrade --install -n nvidia-gpu-operator nvidia-gpu \\ mobb/nvidia-gpu --disable-openapi-validation Skip past Option 2 – Manually to Validate GPU Option 2 – Manually Create Nvidia namespace cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: nvidia-gpu-operator EOF Create Operator Group 
cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: nvidia-gpu-operator-group namespace: nvidia-gpu-operator spec: targetNamespaces: - nvidia-gpu-operator EOF Get latest nvidia channel CHANNEL=$(oc get packagemanifest gpu-operator-certified -n openshift-marketplace -o jsonpath=\u0026#39;{.status.defaultChannel}\u0026#39;) Get latest nvidia package PACKAGE=$(oc get packagemanifests/gpu-operator-certified -n openshift-marketplace -ojson | jq -r \u0026#39;.status.channels[] | select(.name == \u0026#34;\u0026#39;$CHANNEL\u0026#39;\u0026#34;) | .currentCSV\u0026#39;) Create Subscription envsubst \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: gpu-operator-certified namespace: nvidia-gpu-operator spec: channel: \u0026#34;$CHANNEL\u0026#34; installPlanApproval: Automatic name: gpu-operator-certified source: certified-operators sourceNamespace: openshift-marketplace startingCSV: \u0026#34;$PACKAGE\u0026#34; EOF Wait for Operator to finish installing Install Node Feature Discovery Operator Set up Namespace cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nfd EOF Create OperatorGroup cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd EOF Create Subscription cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \u0026#34;stable\u0026#34; installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace EOF Wait for Node Feature discovery to complete installation\nCreate NFD Instance\ncat \u0026lt;\u0026lt;EOF | oc apply -f - kind: NodeFeatureDiscovery apiVersion: nfd.openshift.io/v1 metadata: name: nfd-instance namespace: 
openshift-nfd spec: customConfig: configData: {} operand: image: \u0026gt;- registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:07658ef3df4b264b02396e67af813a52ba416b47ab6e1d2d08025a350ccd2b7b servicePort: 12000 workerConfig: configData: | core: sleepInterval: 60s sources: pci: deviceClassWhitelist: - \u0026#34;0200\u0026#34; - \u0026#34;03\u0026#34; - \u0026#34;12\u0026#34; deviceLabelFields: - \u0026#34;vendor\u0026#34; EOF Apply nVidia Cluster Config Apply cluster config cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: nvidia.com/v1 kind: ClusterPolicy metadata: name: gpu-cluster-policy spec: migManager: enabled: true operator: defaultRuntime: crio initContainer: {} runtimeClass: nvidia deployGFD: true dcgm: enabled: true gfd: {} dcgmExporter: config: name: \u0026#39;\u0026#39; driver: licensingConfig: nlsEnabled: false configMapName: \u0026#39;\u0026#39; certConfig: name: \u0026#39;\u0026#39; kernelModuleConfig: name: \u0026#39;\u0026#39; repoConfig: configMapName: \u0026#39;\u0026#39; virtualTopology: config: \u0026#39;\u0026#39; enabled: true use_ocp_driver_toolkit: true devicePlugin: {} mig: strategy: single validator: plugin: env: - name: WITH_WORKLOAD value: \u0026#39;true\u0026#39; nodeStatusExporter: enabled: true daemonsets: {} toolkit: enabled: true EOF Validate GPU Verify NFD can see your GPU(s) oc describe node | egrep \u0026#39;Roles|pci-10de\u0026#39; | grep -v master You should see output like:\nRoles: worker feature.node.kubernetes.io/pci-10de.present=true Verify node labels oc get node -l nvidia.com/gpu.present Wait until Cluster Policy is ready oc wait --for=jsonpath=\u0026#39;{.status.state}\u0026#39;=ready clusterpolicy \\ gpu-cluster-policy -n nvidia-gpu-operator --timeout=600s Nvidia SMI tool verification oc project nvidia-gpu-operator for i in $(oc get pod -lopenshift.driver-toolkit=true --no-headers |awk \u0026#39;{print $1}\u0026#39;); do echo $i; oc exec -it $i -- nvidia-smi ; echo -e \u0026#39;\\n\u0026#39; ; done 
Create Pod to run a GPU workload oc project nvidia-gpu-operator cat \u0026lt;\u0026lt;EOF | oc apply -f - apiVersion: v1 kind: Pod metadata: name: cuda-vector-add spec: restartPolicy: OnFailure containers: - name: cuda-vector-add image: \u0026#34;quay.io/giantswarm/nvidia-gpu-demo:latest\u0026#34; resources: limits: nvidia.com/gpu: 1 nodeSelector: nvidia.com/gpu.present: \u0026#34;true\u0026#34; EOF View logs oc logs cuda-vector-add --tail=-1 You should see output like the following:\n[Vector addition of 5000 elements] Copy input data from the host memory to the CUDA device CUDA kernel launch with 196 blocks of 256 threads Copy output data from the CUDA device to the host memory Test PASSED Done If successful, the pod can be deleted oc delete pod cuda-vector-add ","permalink":"https://rmmartins.com/2024/08/08/aro-with-nvidia-gpu-workloads/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/aro/gpu/\"\u003eARO with Nvidia GPU Workloads | Red Hat Cloud Experts\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eARO guide to running Nvidia GPU workloads.\u003c/p\u003e\n\u003ch2 id=\"prerequisites\"\u003ePrerequisites\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eoc cli\u003c/li\u003e\n\u003cli\u003eHelm\u003c/li\u003e\n\u003cli\u003ejq, moreutils, and gettext package\u003c/li\u003e\n\u003cli\u003eAn \u003ca href=\"https://cloud.redhat.com/experts/aro/terraform-install\"\u003eARO 4.14 cluster\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u003cstrong\u003eNote:\u003c/strong\u003e If you need to install an ARO cluster, please read our \u003ca href=\"https://cloud.redhat.com/experts/aro/terraform-install\"\u003eARO Terraform Install Guide\u003c/a\u003e. 
Please be sure if you\u0026rsquo;re installing or using an existing ARO cluster that it is 4.14.x or higher.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u003cstrong\u003eNote:\u003c/strong\u003e Please ensure your ARO cluster was created with a valid pull secret (to verify make sure you can see the Operator Hub in the cluster\u0026rsquo;s console). If not, you can follow \u003ca href=\"https://cloud.redhat.com/experts/aro/pull-secret\"\u003ethese\u003c/a\u003e instructions.\u003c/p\u003e","title":"ARO with Nvidia GPU Workloads"},{"content":"This article was originally published at What to consider when using Azure AD as IDP? | Red Hat Cloud Experts\nIn this guide, we will discuss key considerations when using Azure Active Directory (AAD) as the Identity Provider (IDP) for your ARO or ROSA cluster. Below are some helpful references:\nConfigure ARO to Use Azure AD Configuring IDP for ROSA, OSD, and ARO Default Access for All Users in Azure Active Directory Once you set up AAD as the IDP for your cluster, it\u0026rsquo;s important to note that by default, all users in your Azure Active Directory instance will have access to the cluster. They can log in using their AAD credentials through the OpenShift Web Console endpoint:\nHowever, for security purposes, it\u0026rsquo;s recommended to restrict access and only allow specific users who are assigned to access the cluster.\nRestricting Access To implement access restrictions, follow these steps:\nLog in to the Azure Portal and navigate to your AAD instance.\nUnder Enterprise applications, select the application created for the ARO IDP configuration.\nIn the selected Enterprise application, go to Properties and switch the \u0026ldquo;Assignment required?\u0026rdquo; option to YES. 
If you attempt to log in at this point, you will receive a denial error: Enter your username:\nEnter your password:\nThe error message indicates that only users specifically granted access to the application are allowed:\nTo allow access, go to Users and groups in the main blade, click + Add user/group, and add the desired users/groups who should have access to the ARO cluster. Search for the desired user/group and click Select.\nVerify that the user has been assigned:\nYou should now be able to log in with the specified user/group to your cluster: Enter your username:\nEnter your password:\nYou will then be logged in:\nApproval Workflow If you receive a message like the one below, it means that your AAD has the admin consent workflow enabled:\nIn this case, you will need to request and wait for approval from your AAD domain admin. To request access, fill out the request form:\nAnd wait for approval:\nSelf-Approval Process If you have administrative privileges, you can self-approve the request by following these steps:\nPlease note that these steps are based on the official guidance from Microsoft, which is available here.\nGo to your Azure Active Directory Tenant \u0026gt; Enterprise Applications \u0026gt; Admin Consent Requests \u0026gt; All (Preview): Select the application (openshift, in this case) and click Review permissions and consent: A new window will open, prompting you to log in with credentials of an admin with permissions: Click Accept to consent to the permission: You will then see that the request was approved:\nNow you will be able to log in through the AAD option:\nEnter your username:\nEnter your password:\nIt worked!\nAs a best practice, we recommend removing the kubeadmin user after setting up an identity provider. 
You can find instructions on how to do this here.\nUsing the Group Sync Operator Integrating groups from external identity providers with OpenShift, such as synchronizing groups from AAD, can be a valuable feature to enhance your system\u0026rsquo;s functionality. To accomplish this, you can leverage the usage of the Group Sync Operator.\nWe have published a comprehensive how-to guide that walks you through the process, accessible here. By following these instructions, you\u0026rsquo;ll be able to seamlessly synchronize AAD groups into your OpenShift environment, optimizing your workflow and streamlining access management.\n","permalink":"https://rmmartins.com/2024/05/24/what-to-consider-when-using-azure-ad-as-idp/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/idp/considerations-aad-ipd/\"\u003eWhat to consider when using Azure AD as IDP? | Red Hat Cloud Experts\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eIn this guide, we will discuss key considerations when using Azure Active Directory (AAD) as the Identity Provider (IDP) for your ARO or ROSA cluster. Below are some helpful references:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://cloud.redhat.com/experts/idp/azuread-aro/\"\u003eConfigure ARO to Use Azure AD\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://cloud.redhat.com/experts/idp/azuread/\"\u003eConfiguring IDP for ROSA, OSD, and ARO\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"default-access-for-all-users-in-azure-active-directory\"\u003eDefault Access for All Users in Azure Active Directory\u003c/h2\u003e\n\u003cp\u003eOnce you set up AAD as the IDP for your cluster, it\u0026rsquo;s important to note that by default, all users in your Azure Active Directory instance will have access to the cluster. 
They can log in using their AAD credentials through the OpenShift Web Console endpoint:\u003c/p\u003e","title":"What to Consider When Using Azure AD as IDP"},{"content":"Great! You just started your Azure journey and now it\u0026rsquo;s time to scale your infrastructure to meet the growing demands of your business. Microsoft Azure offers a robust cloud platform that can grow with you, but where do you begin? This article will introduce you to three fundamental building blocks for your Azure journey: Azure Subscriptions, Microsoft Entra ID (formerly Azure Active Directory), and Azure Enterprise Scale Landing Zones.\nUnderstanding the Basics Microsoft Entra ID (Former Azure Active Directory) Microsoft Entra ID, previously known as Azure Active Directory (Azure AD), is the backbone of identity and access management in Azure. It is a cloud-based identity and access management service that provides:\nSingle Sign-On (SSO): Allowing users to access multiple applications with one set of login credentials. Multi-Factor Authentication (MFA): Enhancing security by requiring multiple forms of verification. Conditional Access: Implementing policies that manage access to applications based on user conditions. Identity Protection: Detecting and responding to suspicious activities related to user identities. Leveraging Microsoft Entra ID ensures secure and streamlined access to your resources, helping manage user identities efficiently as your organization scales.\nReferences:\nWhat is Microsoft Entra ID Relationship between Entra ID Tenant and Azure Subscriptions Microsoft Entra Roles Azure Subscriptions An Azure Subscription is a container that holds your Azure resources. It\u0026rsquo;s linked to an Azure account and is the unit of management, billing, and control for resources in Azure. Key aspects include:\nResource Management: Organizing and managing resources like virtual machines, databases, and storage accounts. 
Billing: Tracking usage and costs associated with the resources consumed. Access Control: Setting permissions and policies for resource access and management. You can benefit from multiple subscriptions to separate environments (e.g., development, testing, production) or different projects, ensuring clear boundaries and cost management.\nReferences:\nSubscriptions considerations and recommendations Create your initial Azure subscriptions Create additional subscriptions to scale your environment Azure subscription purposes Azure RBAC roles How Entra Roles and Azure RBAC roles are related Azure Fundamental Concepts Enterprise Scale Landing Zones The Enterprise-Scale architecture is a modular design that allows organizations to start with foundational landing zones that support their application portfolios, regardless of whether the applications are being migrated or are newly developed and deployed to Azure. The architecture enables organizations to start as small as needed and scale alongside their business requirements, regardless of scale point.\nEnterprise Scale Landing Zones are predefined, best-practice architecture recommendations designed to help organizations set up their Azure environments in a standardized and scalable way. They address critical areas such as:\nSecurity: Implementing security controls and compliance requirements. Networking: Establishing a robust network topology. Governance: Defining policies, management groups, and resource tagging. Operations: Setting up monitoring, backup, and disaster recovery solutions. Adopting Enterprise Scale Landing Zones from the beginning can help avoid common pitfalls and ensure your Azure environment is ready to scale with your growth.\nReferences:\nWhat is an Azure Landing Zone? 
Landing zone implementation options Reference Implementation Deploying Azure Landing Zones How These Components Interrelate Identity and Access Management with Microsoft Entra ID At the core of any secure cloud environment is the management of identities and access. Microsoft Entra ID provides the necessary tools to manage who can access your Azure resources and under what conditions. By integrating Microsoft Entra ID with your Azure Subscription, you can:\nControl Access: Define roles and permissions for users and groups, ensuring the right people have the right level of access. Implement Policies: Use Conditional Access and other security policies to protect your resources. Monitor Activities: Track user activities and identify potential security risks with Identity Protection. Structuring Your Environment with Azure Subscriptions Azure Subscriptions act as the primary structure within which all your Azure resources reside. By strategically using subscriptions, you can:\nSeparate Concerns: Use different subscriptions for development, testing, and production to isolate environments. Manage Costs: Track spending per subscription to ensure budget adherence and optimize costs. Apply Governance: Implement subscription-level policies and compliance checks to maintain a controlled and compliant environment. Scaling with Enterprise Scale Landing Zones Enterprise Scale Landing Zones provide a blueprint for setting up your Azure environment. They guide you through best practices and ensure your setup is ready for enterprise-scale operations. Key benefits include:\nStandardized Architecture: Follow best-practice templates to avoid misconfigurations and ensure consistency. Enhanced Security: Implement robust security measures and compliance from the outset. Scalability: Design your environment to scale seamlessly as your startup grows. Why This Foundation Matters Cost management, security, and scalability are paramount concerns. 
Here\u0026rsquo;s how Azure Subscriptions, Microsoft Entra ID, and Enterprise Scale Landing Zones can empower your journey:\nCost-Effectiveness: Organized subscriptions enable efficient cost tracking, allowing you to optimize spending and stay within budget. Robust Security: Control user access and implement advanced security measures to safeguard your data and applications. Effortless Scaling: Azure subscriptions and Entra ID can effortlessly adapt to accommodate your growing user base and resource demands. While Azure Enterprise Scale Landing Zones might be overkill when you\u0026rsquo;re just starting out, they\u0026rsquo;re a valuable concept to keep in mind. These architectures provide a secure and standardized foundation for large deployments. As your company scales, understanding Landing Zones can help you plan your future infrastructure effectively.\nYour First Steps on Azure Ready to embark on your Azure adventure? Here\u0026rsquo;s what you can do:\nCreate an Azure Subscription: Sign up for a free Azure account and create your first subscription. Explore Azure Services: Dive into the vast library of Azure services to discover solutions that match your specific needs. Master Entra ID: Learn how to configure Entra ID to manage user access and secure your applications. Plan for the Future: As your company grows, consider exploring Azure Enterprise Scale Landing Zones for a more robust infrastructure. Remember, Microsoft Azure offers a wealth of resources to guide you on your cloud journey. Whether you\u0026rsquo;re a seasoned IT professional or a startup founder just getting started, Microsoft has the tools and support to help your business thrive.\nThis blog post is just the beginning. Stay tuned for future articles where we\u0026rsquo;ll delve deeper into specific Azure services and explore how they can empower your success!\n","permalink":"https://rmmartins.com/2024/05/20/building-a-secure-and-scalable-foundation-for-your-environment-on-azure/","summary":"\u003cp\u003eGreat! 
You just started your Azure journey and now it\u0026rsquo;s time to scale your infrastructure to meet the growing demands of your business. Microsoft Azure offers a robust cloud platform that can grow with you, but where do you begin? This article will introduce you to three fundamental building blocks for your Azure journey: Azure Subscriptions, Microsoft Entra ID (formerly Azure Active Directory), and Azure Enterprise Scale Landing Zones.\u003c/p\u003e\n\u003ch2 id=\"understanding-the-basics\"\u003eUnderstanding the Basics\u003c/h2\u003e\n\u003ch3 id=\"microsoft-entra-id-former-azure-active-directory\"\u003eMicrosoft Entra ID (Former Azure Active Directory)\u003c/h3\u003e\n\u003cp\u003eMicrosoft Entra ID, previously known as Azure Active Directory (Azure AD), is the backbone of identity and access management in Azure. It is a cloud-based identity and access management service that provides:\u003c/p\u003e","title":"Building a Secure and Scalable Foundation for Your Environment on Azure"},{"content":"Introduction: In the realm of cloud computing, optimizing costs is paramount for businesses leveraging Microsoft Azure. Azure offers two primary cost-saving mechanisms: Azure Reservations and Azure Savings Plans. Both options come with distinct advantages, disadvantages, and usage scenarios. In this comprehensive guide, we\u0026rsquo;ll explore these features, penalties, and ideal use cases to empower you in making informed decisions tailored to your business needs.\nUnderstanding Azure Reservations: Azure Reservations provide businesses the opportunity to commit to one-year or three-year plans for various products within the Azure ecosystem. The commitment entails a promise of usage, enabling significant discounts of up to 72% off pay-as-you-go prices.\nAdvantages: Cost Savings: With Azure Reservations, businesses can realize substantial reductions in resource costs, providing a predictable expenditure model. 
Billing Discount: The billing discount is seamlessly applied to matching resources post-purchase, ensuring immediate cost benefits. Automatic Application: Once acquired, the discount automatically integrates with corresponding resources, streamlining management. Drawbacks: Limited Flexibility: Azure Reservations are optimized for stable and predictable workloads. Dynamic or evolving usage patterns may not fully leverage the benefits. Resource Specificity: Reservations are tied to specific compute instance families and regions, limiting adaptability. Penalties: Use-it-or-Lose-it: Failure to utilize reserved resources results in forfeiture, potentially leading to inefficiencies. Cancellation Limitations: Azure imposes restrictions on cancellations and exchanges, necessitating careful planning. (Azure Reservations Exchange Policy) Ideal Use Cases: Azure Reservations excel in scenarios characterized by consistent, uninterrupted workloads with minimal variation in resource requirements or geographic distribution.\nUnpacking Azure Savings Plans: Azure Savings Plans offer a more flexible approach to cost savings, catering to dynamic and evolving workloads. Businesses commit to a fixed hourly spend for one or three years, unlocking savings of up to 65% on eligible compute usage costs.\nAdvantages: Flexible Savings: Savings Plans extend benefits across a wide spectrum of compute resources, providing versatility in cost optimization. Global Application: Savings Plans apply globally, accommodating diverse workloads across different regions and instance families. Drawbacks: Limited Scope: Savings Plans are restricted to compute costs, excluding other expenses like storage, network, and licensing. Non-Cancellable Commitment: Unlike reservations, Savings Plans purchases are final, lacking flexibility for cancellation or exchange. 
(Canceling Azure Savings Plans) Penalties: Non-Cancellable Commitment: Once purchased, Savings Plans cannot be cancelled, necessitating thorough evaluation before acquisition. Ideal Use Cases: Azure Savings Plans are tailor-made for organizations with fluctuating workloads, leveraging varied instance families, compute services, or spanning multiple datacenter regions.\nConclusion: Choosing between Azure Reservations and Azure Savings Plans hinges on a nuanced understanding of your workload characteristics and anticipated usage patterns. Azure Reservations suit scenarios of stable, predictable workloads, while Azure Savings Plans offer flexibility for dynamic environments. By carefully evaluating the advantages, disadvantages, and penalties associated with each option, businesses can maximize cost efficiency and optimize their Azure expenditure effectively.\n","permalink":"https://rmmartins.com/2024/05/15/maximizing-cost-efficiency-in-azure-navigating-azure-reservations-and-savings-plans/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction:\u003c/h2\u003e\n\u003cp\u003eIn the realm of cloud computing, optimizing costs is paramount for businesses leveraging Microsoft Azure. Azure offers two primary cost-saving mechanisms: \u003ca href=\"https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations\"\u003eAzure Reservations\u003c/a\u003e and \u003ca href=\"https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/savings-plan-compute-overview\"\u003eAzure Savings Plans\u003c/a\u003e. Both options come with distinct advantages, disadvantages, and usage scenarios. 
In this comprehensive guide, we\u0026rsquo;ll explore these features, penalties, and ideal use cases to empower you in making informed decisions tailored to your business needs.\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"https://github.com/ricmmartins/rmmartinscom/raw/master/assets/images/cloud-costs.jpeg\"\u003e\u003c/p\u003e\n\u003ch2 id=\"understanding-azure-reservations\"\u003eUnderstanding Azure Reservations:\u003c/h2\u003e\n\u003cp\u003eAzure Reservations provide businesses the opportunity to commit to one-year or three-year plans for various products within the Azure ecosystem. The commitment entails a promise of usage, enabling significant discounts of up to 72% off pay-as-you-go prices.\u003c/p\u003e","title":"Maximizing Cost Efficiency in Azure: Navigating Azure Reservations and Savings Plans"},{"content":"As I embark on my journey of learning about artificial intelligence (AI), I am discovering the fascinating world of large language models (LLMs) and their applications in various technologies. In this article, I aim to share my newfound knowledge and insights with others who are also beginning their journey in AI. We will explore OpenAI, one of the leading organizations in AI research and development, and compare its offerings with Microsoft\u0026rsquo;s Azure OpenAI service.\nUnderstanding OpenAI OpenAI is a prominent research organization known for developing advanced LLMs, such as the Generative Pre-trained Transformer (GPT) series. Notable models include ChatGPT and GPT-4, which are designed to handle conversational tasks, offer fine-tuning for improved performance, and prioritize user data privacy and ethical usage.\nOpenAI Services and Offerings LLM Services: OpenAI offers its LLMs as paid services through subscription-based APIs. While there may be limited free access to earlier models, most LLMs are commercial services. 
Closed Model: OpenAI follows a closed model for its recent offerings, such as GPT-4, providing access primarily through APIs and subscriptions. OpenAI Tools: OpenAI has open-sourced certain tools and resources, such as OpenAI Baselines for reinforcement learning algorithms and OpenAI Gym, a toolkit for developing and comparing RL algorithms. ChatGPT: OpenAI\u0026rsquo;s Conversational AI Model ChatGPT is OpenAI\u0026rsquo;s conversational AI model that leverages its LLMs, such as GPT-3 and GPT-4. It is designed to handle conversational context, provide coherent and contextually relevant responses, and ensure user data privacy and ethical usage.\nIntroduction to Azure OpenAI Azure OpenAI is a service provided by Microsoft that integrates OpenAI\u0026rsquo;s LLMs into the Microsoft Azure platform. This integration allows Azure customers to use OpenAI\u0026rsquo;s models within the Azure environment, offering a seamless and secure experience.\nFeatures of Azure OpenAI First-Party Service: Azure OpenAI acts as a first-party service within the Azure ecosystem, providing managed identity, private endpoints, security, and network integration. Responsible AI: Microsoft\u0026rsquo;s Responsible AI services provide a common framework for responsible AI use and protection against bad actors manipulating data. Deployment and Regions: Azure OpenAI allows users to deploy OpenAI\u0026rsquo;s models across all Azure regions, leveraging Azure\u0026rsquo;s enterprise-grade infrastructure. Pricing: Pricing and billing are integrated into the existing Azure billing system, allowing customers to pay per token used and manage costs easily. Support and Documentation: Azure OpenAI leverages Microsoft\u0026rsquo;s extensive support and documentation for integrating OpenAI models into Azure applications. 
Comparing OpenAI and Azure OpenAI Both OpenAI and Azure OpenAI provide access to OpenAI\u0026rsquo;s LLMs, but there are key differences between the two:\nPlatform: OpenAI offers its models directly through its own API, while Azure OpenAI integrates OpenAI\u0026rsquo;s models into the Microsoft Azure platform. Integration and Ecosystem: OpenAI\u0026rsquo;s API can be integrated independently, offering a standalone service. Azure OpenAI is integrated into the broader Azure ecosystem, allowing customers to combine OpenAI\u0026rsquo;s models with other Azure services. Data Privacy and Security: While OpenAI has its own data privacy and security measures, Azure OpenAI adheres to Microsoft\u0026rsquo;s robust security and compliance standards. Availability and Access: OpenAI\u0026rsquo;s API is available worldwide, while Azure OpenAI is available to Azure customers and may offer additional compliance benefits. Pricing and Billing: OpenAI has its own pricing and billing system, while Azure OpenAI\u0026rsquo;s pricing and billing are integrated into the existing Azure billing system. Support and Documentation: OpenAI provides its own support and documentation, while Azure OpenAI leverages Microsoft\u0026rsquo;s resources. Conclusion In summary, OpenAI offers direct access to its LLMs through its API, while Azure OpenAI integrates these models into the Microsoft Azure platform, providing a seamless and secure experience for Azure customers. Both services play a pivotal role in advancing AI technology and making LLMs accessible to a wide range of users.\nFor more information on Azure OpenAI, I recommend checking out the official documentation from Microsoft. 
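To make the platform and billing differences above tangible, here is a minimal sketch of how the same chat-completion request is addressed to each service, following each service's publicly documented REST conventions. No request is actually sent; the resource name, deployment name, keys, and API version are placeholder values.

```python
# Sketch of how the same chat request is addressed to OpenAI vs Azure
# OpenAI. Nothing is sent over the network; this only builds the URL,
# headers, and body. All credential and resource values are placeholders.

def openai_request(api_key, model):
    # OpenAI's own API: a single global endpoint, bearer-token auth,
    # and the model chosen per request in the body.
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": {"model": model},
    }

def azure_openai_request(resource, deployment, api_key, api_version):
    # Azure OpenAI: the request targets your Azure resource and routes by
    # *deployment* name in the URL, authenticating with an 'api-key'
    # header; the model is fixed by the deployment, not the body.
    return {
        "url": (f"https://{resource}.openai.azure.com/openai/"
                f"deployments/{deployment}/chat/completions"
                f"?api-version={api_version}"),
        "headers": {"api-key": api_key},
        "body": {},
    }

direct = openai_request("sk-placeholder", "gpt-4")
azure = azure_openai_request("myresource", "gpt4-deploy",
                             "azure-key-placeholder", "2024-02-01")
```

The deployment-scoped URL is also what ties Azure OpenAI usage into the Azure billing system: tokens are metered against a resource in your subscription rather than against a standalone OpenAI account.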
Happy learning!\n","permalink":"https://rmmartins.com/2024/05/10/introduction-to-ai-and-comparing-openai-with-azure-openai/","summary":"\u003cp\u003eAs I embark on my journey of learning about artificial intelligence (AI), I am discovering the fascinating world of large language models (LLMs) and their applications in various technologies. In this article, I aim to share my newfound knowledge and insights with others who are also beginning their journey in AI. We will explore OpenAI, one of the leading organizations in AI research and development, and compare its offerings with Microsoft\u0026rsquo;s Azure OpenAI service.\u003c/p\u003e","title":"Introduction to AI and Comparing OpenAI with Azure OpenAI"},{"content":"As we continue our journey into artificial intelligence (AI), it\u0026rsquo;s important to understand how AI is transforming different industries and the ethical and legal challenges associated with its widespread adoption. In this new post, we will explore AI\u0026rsquo;s real-world applications and the complexities of ethical and legal concerns in detail.\nAI in Healthcare AI is making significant advances in healthcare, improving patient care and medical research:\nDiagnostics: AI-powered algorithms can analyze medical images, such as X-rays, MRIs, and CT scans, to identify diseases like cancer or fractures with high accuracy. These systems can serve as a second opinion for radiologists, improving diagnostic accuracy and efficiency. Personalized Medicine: AI enables the development of personalized treatment plans based on a patient\u0026rsquo;s unique genetic makeup. This approach can lead to more effective and targeted therapies, improving patient outcomes. Drug Discovery: AI accelerates the process of discovering new drugs by analyzing vast amounts of data to identify potential compounds and predict their efficacy. This reduces the time and cost associated with bringing new drugs to market. 
Remote Monitoring: AI-powered wearable devices and remote monitoring tools enable healthcare providers to track patients\u0026rsquo; health in real-time, offering proactive care and reducing hospital readmissions. Administrative Efficiency: AI streamlines administrative tasks such as scheduling, billing, and insurance claims processing, freeing up healthcare professionals to focus on patient care. AI in Finance AI is reshaping the finance industry by providing innovative solutions to complex problems:\nFraud Detection: AI algorithms can analyze transactional data in real-time to identify suspicious activity and potential fraud, helping institutions protect their customers and assets. Algorithmic Trading: AI-driven trading algorithms use data analysis and machine learning to make split-second decisions and execute trades at high speeds, optimizing returns and managing risk. Risk Management: AI helps financial institutions assess risk more accurately by analyzing historical data, market trends, and other factors. This can lead to better-informed lending and investment decisions. Customer Service: AI chatbots and virtual assistants provide quick and efficient customer support, handling routine inquiries and freeing up human agents for more complex tasks. AI in Transportation The transportation industry is leveraging AI to improve efficiency, safety, and sustainability:\nAutonomous Vehicles: AI-powered self-driving cars have the potential to revolutionize transportation, reducing accidents and traffic congestion while providing greater mobility for the elderly and disabled. Traffic Management: AI systems can analyze traffic data to optimize signal timings, reduce congestion, and improve traffic flow, making urban areas more efficient and sustainable. Logistics: AI can optimize routes, schedules, and inventory management, reducing transportation costs and improving delivery times for goods. 
AI in Retail AI is enhancing the retail experience for both businesses and consumers:\nPersonalization: AI-driven recommendation engines analyze customer data to provide personalized product suggestions, enhancing customer satisfaction and increasing sales. Inventory Management: AI can predict demand patterns and optimize inventory levels, reducing overstocking and stockouts and improving supply chain efficiency. Customer Engagement: AI chatbots offer customer support around the clock, answering questions and resolving issues quickly, leading to higher customer satisfaction. AI in Cybersecurity AI is becoming an essential tool in the fight against cyber threats:\nThreat Detection: AI algorithms can analyze network traffic and user behavior to identify anomalies and potential attacks, allowing for a rapid response to security breaches. Malware Analysis: AI-powered systems can detect and analyze new malware strains, providing insights into their behavior and potential impact. Security Automation: AI can automate routine security tasks such as patch management and vulnerability scanning, freeing up cybersecurity professionals to focus on more strategic challenges. AI in Education AI is transforming the education sector by offering personalized and efficient learning experiences:\nPersonalized Learning: AI-powered adaptive learning platforms tailor educational content to individual students\u0026rsquo; needs and learning styles, enhancing engagement and retention. Grading Automation: AI can grade assignments and exams, providing instant feedback to students and reducing the workload for teachers. Administrative Efficiency: AI streamlines administrative tasks such as scheduling and student record management, allowing educators to focus on teaching. 
Ethical and Legal Implications of AI As AI becomes more integrated into various aspects of society, ethical and legal challenges arise that must be addressed to ensure responsible and fair AI deployment:\nAlgorithmic Bias: AI systems can inadvertently perpetuate bias if trained on unrepresentative data. It\u0026rsquo;s crucial to develop diverse and inclusive datasets and continuously monitor AI models for bias. Data Privacy: AI relies on vast amounts of data, raising concerns about how data is collected, stored, and used. Organizations must prioritize data protection and adhere to privacy regulations. Transparency and Explainability: AI systems should be transparent and explainable, allowing users to understand how decisions are made and enabling oversight and accountability. Accountability and Responsibility: AI developers and organizations must take responsibility for the outcomes of AI systems, ensuring they are used ethically and do not cause harm. Conclusion AI\u0026rsquo;s impact across industries is transformative, presenting exciting opportunities and challenges. By exploring the practical applications of AI in healthcare, finance, transportation, retail, cybersecurity, and education, you gain a deeper understanding of its potential and the responsibilities that come with its deployment.\nAs we continue our journey in AI learning, it\u0026rsquo;s important to stay informed about the ethical and legal implications of AI to ensure that its development and deployment benefit society as a whole.\n","permalink":"https://rmmartins.com/2024/05/10/real-world-applications-and-ethical-implications-of-ai/","summary":"\u003cp\u003eAs we continue our journey into artificial intelligence (AI), it\u0026rsquo;s important to understand how AI is transforming different industries and the ethical and legal challenges associated with its widespread adoption. 
In this new post, we will explore AI\u0026rsquo;s real-world applications and the complexities of ethical and legal concerns in detail.\u003c/p\u003e\n\u003ch2 id=\"ai-in-healthcare\"\u003eAI in Healthcare\u003c/h2\u003e\n\u003cp\u003eAI is making significant advances in healthcare, improving patient care and medical research:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eDiagnostics\u003c/strong\u003e: AI-powered algorithms can analyze medical images, such as X-rays, MRIs, and CT scans, to identify diseases like cancer or fractures with high accuracy. These systems can serve as a second opinion for radiologists, improving diagnostic accuracy and efficiency.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePersonalized Medicine\u003c/strong\u003e: AI enables the development of personalized treatment plans based on a patient\u0026rsquo;s unique genetic makeup. This approach can lead to more effective and targeted therapies, improving patient outcomes.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eDrug Discovery\u003c/strong\u003e: AI accelerates the process of discovering new drugs by analyzing vast amounts of data to identify potential compounds and predict their efficacy. 
This reduces the time and cost associated with bringing new drugs to market.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eRemote Monitoring\u003c/strong\u003e: AI-powered wearable devices and remote monitoring tools enable healthcare providers to track patients\u0026rsquo; health in real-time, offering proactive care and reducing hospital readmissions.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAdministrative Efficiency\u003c/strong\u003e: AI streamlines administrative tasks such as scheduling, billing, and insurance claims processing, freeing up healthcare professionals to focus on patient care.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"ai-in-finance\"\u003eAI in Finance\u003c/h2\u003e\n\u003cp\u003eAI is reshaping the finance industry by providing innovative solutions to complex problems:\u003c/p\u003e","title":"Real-World Applications and Ethical Implications of AI"},{"content":"This article was originally published at Azure Front Door with ARO (Azure Red Hat OpenShift) | Red Hat Cloud Experts\nSecurely exposing an Internet-facing application with a private ARO cluster.\nWhen you create a cluster on ARO you have several options for making the cluster public or private. With a public cluster you are allowing Internet traffic to the api and *.apps endpoints. With a private cluster you can make either or both the api and .apps endpoints private.\nHow can you allow Internet access to an application running on your private cluster where the .apps endpoint is private? This document will guide you through using Azure Front Door to expose your applications to the Internet. There are several advantages to this approach, namely that your cluster and all the resources in your Azure account can remain private, providing you an extra layer of security. Azure Front Door operates at the edge, so we are controlling traffic before it even gets into your Azure account.
On top of that, Azure Front Door also offers WAF and DDoS protection, certificate management, and SSL offloading, just to name a few benefits.\nAdapted from ARO Reference Architecture\nPrerequisites az cli oc cli a custom domain a DNS zone that you can easily modify To build and deploy the application:\nmaven cli quarkus cli OpenJDK Java 8 Make sure to use the same terminal session while going through this guide for all commands, as we will reference environment variables set or created through the guide.\nGet Started Create a private ARO cluster. Follow this guide to Create a private ARO cluster or simply run this bash script Set Environment Variables Manually set environment variables AROCLUSTER=\u0026lt;cluster name\u0026gt; ARORG=\u0026lt;resource group for the cluster\u0026gt; AFD_NAME=\u0026lt;name you want to use for the front door instance\u0026gt; DOMAIN=\u0026#39;e.g. aro.kmobb.com\u0026#39; ARO_APP_FQDN=\u0026#39;e.g. minesweeper.aro.kmobb.com\u0026#39; AFD_MINE_CUSTOM_DOMAIN_NAME=\u0026#39;minesweeper-aro-kmobb-com\u0026#39; PRIVATEENDPOINTSUBNET_PREFIX=\u0026#39;10.0.6.0/24\u0026#39; PRIVATEENDPOINTSUBNET_NAME=\u0026#39;PrivateEndpoint-subnet\u0026#39; Set environment variables with Bash UNIQUEID=$RANDOM ARO_RGNAME=$(az aro show -n $AROCLUSTER -g $ARORG --query \u0026#34;clusterProfile.resourceGroupId\u0026#34; -o tsv | sed \u0026#39;s/.*\\///\u0026#39;) LOCATION=$(az aro show --name $AROCLUSTER --resource-group $ARORG --query location -o tsv) INTERNAL_LBNAME=$(az network lb list --resource-group $ARO_RGNAME --query \u0026#34;[?
contains(name, \u0026#39;internal\u0026#39;)].name\u0026#34; -o tsv) WORKER_SUBNET_NAME=$(az aro show --name $AROCLUSTER --resource-group $ARORG --query \u0026#39;workerProfiles[0].subnetId\u0026#39; -o tsv | sed \u0026#39;s/.*\\///\u0026#39;) WORKER_SUBNET_ID=$(az aro show --name $AROCLUSTER --resource-group $ARORG --query \u0026#39;workerProfiles[0].subnetId\u0026#39; -o tsv) VNET_NAME=$(az network vnet list -g $ARORG --query \u0026#39;[0].name\u0026#39; -o tsv) LBCONFIG_ID=$(az network lb frontend-ip list -g $ARO_RGNAME --lb-name $INTERNAL_LBNAME --query \u0026#34;[? contains(subnet.id,\u0026#39;$WORKER_SUBNET_ID\u0026#39;)].id\u0026#34; -o tsv) LBCONFIG_IP=$(az network lb frontend-ip list -g $ARO_RGNAME --lb-name $INTERNAL_LBNAME --query \u0026#34;[? contains(subnet.id,\u0026#39;$WORKER_SUBNET_ID\u0026#39;)].privateIPAddress\u0026#34; -o tsv) Create a Private Link Service After we have the cluster up and running, we need to create a private link service. The private link service will provide private and secure connectivity between the Front Door Service and our cluster.\nDisable the worker subnet private link service network policy for the worker subnet az network vnet subnet update \\ --disable-private-link-service-network-policies true \\ --name $WORKER_SUBNET_NAME \\ --resource-group $ARORG \\ --vnet-name $VNET_NAME Create a private link service targeting the worker subnets az network private-link-service create \\ --name $AROCLUSTER-pls \\ --resource-group $ARORG \\ --private-ip-address-version IPv4 \\ --private-ip-allocation-method Dynamic \\ --vnet-name $VNET_NAME \\ --subnet $WORKER_SUBNET_NAME \\ --lb-frontend-ip-configs $LBCONFIG_ID privatelink_id=$(az network private-link-service show -n $AROCLUSTER-pls -g $ARORG --query \u0026#39;id\u0026#39; -o tsv) Create and Configure an instance of Azure Front Door Create a Front Door Instance az afd profile create \\ --resource-group $ARORG \\ --profile-name $AFD_NAME \\ --sku Premium_AzureFrontDoor afd_id=$(az 
afd profile show -g $ARORG --profile-name $AFD_NAME --query \u0026#39;id\u0026#39; -o tsv) Create an endpoint for the ARO Internal Load Balancer az afd endpoint create \\ --resource-group $ARORG \\ --enabled-state Enabled \\ --endpoint-name \u0026#39;aro-ilb\u0026#39;$UNIQUEID \\ --profile-name $AFD_NAME Create a Front Door Origin Group that will point to the ARO Internal Load Balancer az afd origin-group create \\ --origin-group-name \u0026#39;afdorigin\u0026#39; \\ --probe-path \u0026#39;/\u0026#39; \\ --probe-protocol Http \\ --probe-request-type GET \\ --probe-interval-in-seconds 100 \\ --profile-name $AFD_NAME \\ --resource-group $ARORG \\ --sample-size 4 \\ --successful-samples-required 3 \\ --additional-latency-in-milliseconds 50 Create a Front Door Origin with the above Origin Group that will point to the ARO Internal Load Balancer az afd origin create \\ --enable-private-link true \\ --private-link-resource $privatelink_id \\ --private-link-location $LOCATION \\ --private-link-request-message \u0026#39;Private link service from AFD\u0026#39; \\ --weight 1000 \\ --priority 1 \\ --http-port 80 \\ --https-port 443 \\ --origin-group-name \u0026#39;afdorigin\u0026#39; \\ --enabled-state Enabled \\ --host-name $LBCONFIG_IP \\ --origin-name \u0026#39;afdorigin\u0026#39; \\ --profile-name $AFD_NAME \\ --resource-group $ARORG Approve the private link connection privatelink_pe_id=$(az network private-link-service show -n $AROCLUSTER-pls -g $ARORG --query \u0026#39;privateEndpointConnections[0].id\u0026#39; -o tsv) az network private-endpoint-connection approve \\ --description \u0026#39;Approved\u0026#39; \\ --id $privatelink_pe_id Add your custom domain to Azure Front Door az afd custom-domain create \\ --certificate-type ManagedCertificate \\ --custom-domain-name $AFD_MINE_CUSTOM_DOMAIN_NAME \\ --host-name $ARO_APP_FQDN \\ --minimum-tls-version TLS12 \\ --profile-name $AFD_NAME \\ --resource-group $ARORG Create an Azure Front Door
endpoint for your custom domain az afd endpoint create \\ --resource-group $ARORG \\ --enabled-state Enabled \\ --endpoint-name \u0026#39;aro-mine-\u0026#39;$UNIQUEID \\ --profile-name $AFD_NAME Add an Azure Front Door route for your custom domain az afd route create \\ --endpoint-name \u0026#39;aro-mine-\u0026#39;$UNIQUEID \\ --forwarding-protocol HttpOnly \\ --https-redirect Disabled \\ --origin-group \u0026#39;afdorigin\u0026#39; \\ --profile-name $AFD_NAME \\ --resource-group $ARORG \\ --route-name \u0026#39;aro-mine-route\u0026#39; \\ --supported-protocols Http Https \\ --patterns-to-match \u0026#39;/*\u0026#39; \\ --custom-domains $AFD_MINE_CUSTOM_DOMAIN_NAME Update DNS - Get a validation token from Front Door afdToken=$(az afd custom-domain show \\ --resource-group $ARORG \\ --profile-name $AFD_NAME \\ --custom-domain-name $AFD_MINE_CUSTOM_DOMAIN_NAME \\ --query \u0026#34;validationProperties.validationToken\u0026#34; \\ -o tsv) Create a DNS Zone az network dns zone create -g $ARORG -n $DOMAIN You will need to configure your nameservers to point to Azure. The output of running this zone create will show you the nameservers for this zone that you will need to set up within your domain registrar.\nCreate a new TXT record in your DNS server\naz network dns record-set txt add-record -g $ARORG -z $DOMAIN -n _dnsauth.$(echo $ARO_APP_FQDN | sed \u0026#39;s/\\..*//\u0026#39;) --value $afdToken Check if the domain has been validated: Note this can take several hours. Your FQDN will not resolve until Front Door validates your domain.\naz afd custom-domain list -g $ARORG --profile-name $AFD_NAME --query \u0026#34;[?
contains(hostName, \u0026#39;$ARO_APP_FQDN\u0026#39;)].domainValidationState\u0026#34; Add a CNAME record to DNS Get the Azure Front Door endpoint:\nafdEndpoint=$(az afd endpoint show -g $ARORG --profile-name $AFD_NAME --endpoint-name aro-mine-$UNIQUEID --query \u0026#34;hostName\u0026#34; -o tsv) Create a CNAME record for the application\naz network dns record-set cname set-record -g $ARORG -z $DOMAIN \\ -n $(echo $ARO_APP_FQDN | sed \u0026#39;s/\\..*//\u0026#39;) -c $afdEndpoint Deploy an application Now the fun part, let\u0026rsquo;s deploy an application! We will be deploying a Java-based application called microsweeper. This is an application that runs on OpenShift and uses a PostgreSQL database to store scores. With ARO being a first-class service on Azure, we will create an Azure Database for PostgreSQL service and connect it to our cluster with a private endpoint.\nCreate an Azure Database for PostgreSQL server az postgres server create --name microsweeper-database --resource-group $ARORG --location $LOCATION --admin-user quarkus --admin-password r3dh4t1!
--sku-name GP_Gen5_2 POSTGRES_ID=$(az postgres server show -n microsweeper-database -g $ARORG --query \u0026#39;id\u0026#39; -o tsv) Create a private endpoint connection for the database az network vnet subnet create \\ --resource-group $ARORG \\ --vnet-name $VNET_NAME \\ --name $PRIVATEENDPOINTSUBNET_NAME \\ --address-prefixes $PRIVATEENDPOINTSUBNET_PREFIX \\ --disable-private-endpoint-network-policies true az network private-endpoint create \\ --name \u0026#39;postgresPvtEndpoint\u0026#39; \\ --resource-group $ARORG \\ --vnet-name $VNET_NAME \\ --subnet $PRIVATEENDPOINTSUBNET_NAME \\ --private-connection-resource-id $POSTGRES_ID \\ --group-id \u0026#39;postgresqlServer\u0026#39; \\ --connection-name \u0026#39;postgresdbConnection\u0026#39; Create and configure a private DNS Zone for the Postgres database az network private-dns zone create \\ --resource-group $ARORG \\ --name \u0026#39;privatelink.postgres.database.azure.com\u0026#39; az network private-dns link vnet create \\ --resource-group $ARORG \\ --zone-name \u0026#39;privatelink.postgres.database.azure.com\u0026#39; \\ --name \u0026#39;PostgresDNSLink\u0026#39; \\ --virtual-network $VNET_NAME \\ --registration-enabled false az network private-endpoint dns-zone-group create \\ --resource-group $ARORG \\ --name \u0026#39;PostgresDb-ZoneGroup\u0026#39; \\ --endpoint-name \u0026#39;postgresPvtEndpoint\u0026#39; \\ --private-dns-zone \u0026#39;privatelink.postgres.database.azure.com\u0026#39; \\ --zone-name \u0026#39;postgresqlServer\u0026#39; NETWORK_INTERFACE_ID=$(az network private-endpoint show --name postgresPvtEndpoint --resource-group $ARORG --query \u0026#39;networkInterfaces[0].id\u0026#39; -o tsv) POSTGRES_IP=$(az resource show --ids $NETWORK_INTERFACE_ID --api-version 2019-04-01 --query \u0026#39;properties.ipConfigurations[0].properties.privateIPAddress\u0026#39; -o tsv) az network private-dns record-set a create --name $UNIQUEID-microsweeper-database --zone-name 
privatelink.postgres.database.azure.com --resource-group $ARORG az network private-dns record-set a add-record --record-set-name $UNIQUEID-microsweeper-database --zone-name privatelink.postgres.database.azure.com --resource-group $ARORG -a $POSTGRES_IP Create a postgres database that will contain scores for the minesweeper application az postgres db create \\ --resource-group $ARORG \\ --name score \\ --server-name microsweeper-database Deploy the minesweeper application Clone the git repository git clone https://github.com/rh-mobb/aro-workshop-app.git Change to the root directory cd aro-workshop-app Log into your openshift cluster Before you deploy your application, you will need to be connected to a private network that has access to the cluster.\nkubeadmin_password=$(az aro list-credentials --name $AROCLUSTER --resource-group $ARORG --query kubeadminPassword --output tsv) apiServer=$(az aro show -g $ARORG -n $AROCLUSTER --query apiserverProfile.url -o tsv) oc login $apiServer -u kubeadmin -p $kubeadmin_password Create a new OpenShift Project oc new-project minesweeper Add the openshift extension to quarkus quarkus ext add openshift Edit microsweeper-quarkus/src/main/resources/application.properties Make sure your file looks like the one below, changing the IP address on line 3 to the private IP address of your postgres instance.\n# Database configurations %prod.quarkus.datasource.db-kind=postgresql %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://10.1.6.9:5432/score %prod.quarkus.datasource.jdbc.driver=org.postgresql.Driver %prod.quarkus.datasource.username=quarkus@microsweeper-database %prod.quarkus.datasource.password=r3dh4t1! 
%prod.quarkus.hibernate-orm.database.generation=drop-and-create # OpenShift configurations %prod.quarkus.kubernetes-client.trust-certs=true %prod.quarkus.kubernetes.deploy=true %prod.quarkus.kubernetes.deployment-target=openshift %prod.quarkus.openshift.build-strategy=docker Build and deploy the Quarkus application to OpenShift quarkus build --no-tests Create a route to your custom domain cat \u0026lt;\u0026lt; EOF | oc apply -f - apiVersion: route.openshift.io/v1 kind: Route metadata: labels: app.kubernetes.io/name: microsweeper-appservice app.kubernetes.io/version: 1.0.0-SNAPSHOT app.openshift.io/runtime: quarkus name: microsweeper-appservice namespace: minesweeper spec: host: minesweeper.aro.kmobb.com to: kind: Service name: microsweeper-appservice weight: 100 targetPort: port: 8080 wildcardPolicy: None EOF Check the DNS settings of your application nslookup $ARO_APP_FQDN Test the application Point your browser to your domain!\nClean up To clean up everything you created, simply delete the resource group\naz group delete -g $ARORG ","permalink":"https://rmmartins.com/2024/04/09/azure-front-door-with-aro-azure-red-hat-openshift/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/aro/frontdoor/\"\u003eAzure Front Door with ARO (Azure Red Hat OpenShift) | Red Hat Cloud Experts\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eSecurely exposing an Internet-facing application with a private ARO cluster.\u003c/p\u003e\n\u003cp\u003eWhen you create a cluster on ARO you have several options for making the cluster public or private. With a public cluster you are allowing Internet traffic to the api and *.apps endpoints.
With a private cluster you can make either or both the api and .apps endpoints private.\u003c/p\u003e","title":"Azure Front Door with ARO (Azure Red Hat OpenShift)"},{"content":"Introduction OpenShift, developed by Red Hat, extends Kubernetes to provide a more robust platform for deploying and managing containerized applications in a complete application platform. It integrates the core features of Kubernetes with additional tools and services to enhance developer productivity and operational efficiency. This guide aims to introduce beginners to deploying applications on OpenShift Local, a streamlined method to run OpenShift clusters locally for development and testing.\nUsing a local OpenShift environment offers several benefits, especially for developers who are new to OpenShift or Kubernetes:\nSafe Learning Environment: It allows experimenting without the risk of affecting a production environment. This is crucial for beginners who are learning the ropes of container orchestration and application deployment. Cost-Effective: There\u0026rsquo;s no need for cloud resources, making it an economical solution for testing and development purposes. Convenience: Developers can easily test and debug their applications locally, which streamlines the development process. Several methods for setting up OpenShift locally include:\nOpenShift Local: This is the new name for CodeReady Containers. OpenShift Local is an official Red Hat solution to run OpenShift 4.x locally. It provides a straightforward way to create a single-node OpenShift 4 cluster. MiniShift: An older tool compared to CodeReady Containers. MiniShift was commonly used for running a single-node OpenShift cluster. It runs on top of a virtual machine and is suitable for development and testing purposes. MiniShift supports OpenShift 3.x versions. OKD: OKD is the Community Distribution of Kubernetes that powers Red Hat OpenShift.
It offers more flexibility and can be used to set up a more extensive development environment than CodeReady Containers or MiniShift. However, setting up an OKD cluster is generally more complex. Containerized Development Environments: Some developers choose to use containerized development environments that mimic OpenShift\u0026rsquo;s behavior. Tools like Docker and Podman can be used to run OpenShift components in containers. This approach requires more manual setup and configuration. Each method has its benefits, depending on your project needs and system capabilities. In this post, I\u0026rsquo;ll cover the usage of OpenShift Local.\nStep-by-Step Deployment on Local OpenShift To get started with OpenShift Local, download the crc tool from the Red Hat Console. If you don\u0026rsquo;t have a Red Hat account, you can create one for free with the Red Hat Developer program.\nStep 1: Start Your Local OpenShift Cluster Download OpenShift Local: Visit the OpenShift Local download page and download the version for your OS.\nInstall OpenShift Local:\nExtract the downloaded file. Run the setup command: crc setup Start the OpenShift Cluster:\nInitialize the cluster: crc start This process may take several minutes. Access OpenShift Console:\nRetrieve the console URL and login details: crc console --credentials Step 2: Install the OpenShift CLI (oc) If you haven\u0026rsquo;t already installed the OpenShift CLI, download and install it from the Red Hat Console.\nStep 3: Authenticate to OpenShift Authenticate to your OpenShift cluster using the oc CLI; this will allow you to execute deployment commands:\noc login -u developer -p developer Step 4: Create a New Project A project in OpenShift is akin to a Kubernetes namespace but with additional management features.
It\u0026rsquo;s a logical grouping that helps in resource organization, isolation, and multi-tenancy.\noc new-project my-php-project Step 5: Deploy the Application In this guide, we\u0026rsquo;ll deploy the ricmmartins/aro-demo-dryrun PHP application.\noc new-app https://github.com/ricmmartins/aro-demo-dryrun.git Step 6: Monitor the Deployment To monitor the deployment process, use:\noc status Step 7: Expose Your Application In OpenShift, a \u0026lsquo;route\u0026rsquo; is a powerful concept that exposes a service to an external host name.\noc expose svc/aro-demo-dryrun Step 8: Access the Application Use oc get route to find the URL of your application:\noc get route/aro-demo-dryrun Visit the URL in your browser to view your PHP application.\nConclusion Deploying an application on OpenShift Local is a beginner-friendly way to delve into the world of Kubernetes and container orchestration. This hands-on experience lays a solid foundation for more advanced OpenShift concepts and practices. As developers become more comfortable with OpenShift, they can explore its full potential in cloud environments, scaling, and managing complex, containerized applications.\n","permalink":"https://rmmartins.com/2023/12/08/deploying-an-application-on-openshift-local-a-beginners-guide/","summary":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eOpenShift, developed by Red Hat, extends Kubernetes to provide a more robust platform for deploying and managing containerized applications in a complete application platform. It integrates the core features of Kubernetes with additional tools and services to enhance developer productivity and operational efficiency. 
This guide aims to introduce beginners to deploying applications on OpenShift Local, a streamlined method to run OpenShift clusters locally for development and testing.\u003c/p\u003e\n\u003cp\u003eUsing a local OpenShift environment offers several benefits, especially for developers who are new to OpenShift or Kubernetes:\u003c/p\u003e","title":"Deploying an Application on OpenShift Local: A Beginner's Guide"},{"content":"\nJust sharing an awesome learning resource I found recently. It will introduce you to the application development cycle leveraging OpenShift\u0026rsquo;s tooling \u0026amp; features with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). You will get a brief introduction to several OpenShift features like OpenShift Pipelines, OpenShift GitOps, and OpenShift DevSpaces.\nCheck it out at https://devsecops-workshop.github.io/\n","permalink":"https://rmmartins.com/2023/12/07/devsecops-workshop/","summary":"\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/wp-content/uploads/2025/04/image-1024x558.png\"\u003e\u003c/p\u003e\n\u003cp\u003eJust sharing an awesome learning resource I found recently. It will introduce you to the application development cycle leveraging OpenShift\u0026rsquo;s tooling \u0026amp; features with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). You will get a brief introduction to several OpenShift features like OpenShift Pipelines, OpenShift GitOps, and OpenShift DevSpaces.\u003c/p\u003e\n\u003cp\u003eCheck it out at \u003ca href=\"https://devsecops-workshop.github.io/\"\u003ehttps://devsecops-workshop.github.io/\u003c/a\u003e\u003c/p\u003e","title":"DevSecOps Workshop"},{"content":"UBI stands for Universal Base Image. It\u0026rsquo;s a type of container-based image that Red Hat has created and maintains. UBI images are derived from Red Hat Enterprise Linux (RHEL) and are designed to be a foundation for building containerized applications.
Here\u0026rsquo;s why UBI is significant and why you might consider using it:\nCompatibility with RHEL: UBI is based on RHEL, which means it inherits the reliability, security, and performance of RHEL. This compatibility is crucial for organizations that already rely on RHEL for their enterprise applications. Open and Freely Distributable: Unlike RHEL, which requires a subscription, UBI can be used freely. This means you can build your container images on UBI and redistribute them without worrying about RHEL licensing, while still benefiting from the stability and security of a RHEL base. Security and Compliance: UBI images benefit from Red Hat\u0026rsquo;s commitment to security and compliance. They receive regular updates and patches, which is essential for maintaining security in containerized environments. Broad Ecosystem and Support: Since UBI is based on RHEL, it has broad support from software vendors and the open-source community. This extensive ecosystem ensures compatibility with a wide range of applications and tools. Ease of Certification: For software vendors, using UBI can simplify the process of certifying their applications for RHEL, as UBI containers can be run on both RHEL and non-RHEL hosts. Container Portability: Containers built on UBI can run anywhere that supports container workloads, including Red Hat OpenShift, Kubernetes, and even non-Red Hat platforms. This portability is crucial for organizations adopting a hybrid or multi-cloud strategy. Consistency Across Environments: UBI helps maintain consistency across development, testing, and production environments, reducing the \u0026ldquo;it works on my machine\u0026rdquo; problem. Support for Different Architectures: UBI images are available for multiple architectures, including x86_64, s390x, and others, which is important for organizations with diverse infrastructure needs.
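To make the points above concrete, here is a minimal sketch of building on a UBI base with Podman or Docker. The image tag (ubi9/ubi-minimal), the httpd package, and the file names are illustrative assumptions; check the Red Hat Container Catalog for current images and tags:

```shell
# Write a minimal Containerfile based on a freely redistributable UBI 9 image.
# (Tag and package below are illustrative; see the Red Hat Container Catalog.)
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi-minimal
RUN microdnf install -y httpd && microdnf clean all
COPY index.html /var/www/html/
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
EOF

# Build and run with Podman (Docker works the same way);
# no RHEL subscription is needed to pull or redistribute the result:
#   podman build -t my-ubi-app .
#   podman run -d -p 8080:80 my-ubi-app
```

Because the base comes from the public registry.access.redhat.com registry, the resulting image can be pushed to any registry and run on RHEL or non-RHEL hosts alike.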
In summary, UBI combines the reliability and security of RHEL with the flexibility and freedom of a container-based image that can be freely shared and redistributed. It\u0026rsquo;s an excellent choice for organizations looking to build containerized applications that are secure, compliant, and compatible with a wide range of environments and platforms. See more here\n","permalink":"https://rmmartins.com/2023/12/07/have-you-already-had-a-chance-to-think-about-why-you-should-consider-using-ubi/","summary":"\u003cp\u003eUBI stands for Universal Base Image. It\u0026rsquo;s a type of container-based image that Red Hat has created and maintains. UBI images are derived from Red Hat Enterprise Linux (RHEL) and are designed to be a foundation for building containerized applications. Here\u0026rsquo;s why UBI is significant and why you might consider to use it:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eCompatibility with RHEL\u003c/strong\u003e: UBI is based on RHEL, which means it inherits the reliability, security, and performance of RHEL. This compatibility is crucial for organizations that already rely on RHEL for their enterprise applications.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eOpen and Freely Distributable\u003c/strong\u003e: Unlike RHEL, which requires a subscription, UBI can be used freely. This means you can build your container images on UBI and redistribute them without worrying about RHEL licensing, while still benefiting from the stability and security of a RHEL base.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSecurity and Compliance\u003c/strong\u003e: UBI images benefit from Red Hat\u0026rsquo;s commitment to security and compliance. 
They receive regular updates and patches, which is essential for maintaining security in containerized environments.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eBroad Ecosystem and Support\u003c/strong\u003e: Since UBI is based on RHEL, it has broad support from software vendors and the open-source community. This extensive ecosystem ensures compatibility with a wide range of applications and tools.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eEase of Certification\u003c/strong\u003e: For software vendors, using UBI can simplify the process of certifying their applications for RHEL, as UBI containers can be run on both RHEL and non-RHEL hosts.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eContainer Portability\u003c/strong\u003e: Containers built on UBI can run anywhere that supports container workloads, including Red Hat OpenShift, Kubernetes, and even non-Red Hat platforms. This portability is crucial for organizations adopting a hybrid or multi-cloud strategy.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eConsistency Across Environments\u003c/strong\u003e: UBI helps maintain consistency across development, testing, and production environments, reducing the \u0026ldquo;it works on my machine\u0026rdquo; problem.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSupport for Different Architectures\u003c/strong\u003e: UBI images are available for multiple architectures, including x86_64, s390x, and others, which is important for organizations with diverse infrastructure needs.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIn summary, UBI combines the reliability and security of RHEL with the flexibility and freedom of a container-based image that can be freely shared and redistributed. It\u0026rsquo;s an excellent choice for organizations looking to build containerized applications that are secure, compliant, and compatible with a wide range of environments and platforms. 
See \u003ca href=\"https://catalog.redhat.com/software/base-images\"\u003emore here\u003c/a\u003e\u003c/p\u003e","title":"Have You Already Had a Chance to Think About Why You Should Consider Using UBI?"},{"content":"This article was originally published at https://cloud.redhat.com/experts/aro/prereq-list/\nBefore deploying an ARO cluster, ensure you meet the following prerequisites:\nSetup Tools Install Azure CLI: Essential for managing Azure resources. Refer to the official documentation Verify Resources Core Quota: Confirm availability of at least 40 cores to create and run an OpenShift Cluster. Permissions RBAC Settings: Ensure you have Contributor and User Access Administrator roles on the cluster resource group. Assign Network Contributor role on the virtual network, if using a separate resource group. For stricter security policies, create a custom role with necessary permissions. Reference link. Microsoft Entra (formerly Azure AD): Have a member user of the tenant, or a guest user with the Application administrator role, for the tooling to create an application and service principal on your behalf for the cluster. Terraform: If you plan to use Terraform for the deployment of the cluster, see the required permissions here. Azure Integration Resource Provider: Register the Microsoft.RedHatOpenshift resource provider. Reference link. Red Hat Integration: Obtain a Red Hat pull secret (Recommended for access to additional content like Operators and Container Registries). Domain Configuration This step is optional since you can use the built-in domain.\nCustom Domain: Post-cluster creation, configure two DNS A records for the specified domain: api pointing to the API server IP. *.apps pointing to the ingress IP. Retrieve IP addresses using: az aro show -n -g --query '{\u0026quot;api\u0026quot;:apiserverProfile.ip, \u0026quot;ingress\u0026quot;:ingressProfiles[0].ip}'.
Access the OpenShift console via https://console-openshift-console.apps.example.com (instead of the built-in domain). If using custom DNS, set up a custom CA for your ingress controller and API server. Network Configuration Virtual Network: Create or provide a VNet with two subnets for master and worker nodes. Ensure Pod and Service Network CIDRs do not overlap with other network ranges. Reference link. Outbound Traffic: Default deployment is with outboundType: LoadBalancer, meaning that a Public IP is associated with the Load Balancer for the cluster egress connectivity. To restrict Internet Egress, set --outbound-type to UserDefinedRouting. Consider using a firewall solution of your choice, or Azure-native solutions like Azure Firewall or NAT Gateway, for enhanced security. Reference link. Cluster Creation Private vs Public Clusters: Private Cluster: This is typically the most suitable option for production use. A Private Cluster makes the cluster API and *.apps endpoints private. Utilize Azure Front Door for Internet access to applications on a private cluster. This approach significantly enhances security by keeping the cluster and Azure resources private, managing traffic at the edge, and offering benefits such as Web Application Firewall (WAF), DDoS protection, SSL management, and offloading. For detailed implementation guidance, refer to the Azure Front Door documentation. Public Cluster: Opt for a Public Cluster only in situations like a \u0026ldquo;sandbox cluster\u0026rdquo; or where establishing a private method for console and API access is not feasible or desired, since the cluster API and *.apps endpoints will be exposed to the Internet. Egress Lockdown: Note that ARO clusters do not require Internet connectivity. Learn about Egress Lockdown. All of the required connections for an ARO cluster are proxied through the service; see the list of endpoints here. Create the Cluster: Proceed to create your ARO cluster once all prerequisites are met.
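Once the checklist is satisfied, the creation flow can be sketched roughly as follows. The resource names, region, and CIDRs are placeholders, and flags can vary by Azure CLI version, so treat this as a starting point rather than a definitive script; it is written to a file so you can review it before running:

```shell
# Sketch of an ARO deployment script; names, region, and CIDRs are examples only.
cat > deploy-aro.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

LOCATION=eastus      # pick your region
RG=aro-rg            # needs Contributor + User Access Administrator
CLUSTER=aro-cluster

# One-time per subscription: register the resource provider
az provider register -n Microsoft.RedHatOpenShift --wait

# Resource group and a VNet with the two required subnets
az group create -n "$RG" -l "$LOCATION"
az network vnet create -g "$RG" -n aro-vnet --address-prefixes 10.0.0.0/22
az network vnet subnet create -g "$RG" --vnet-name aro-vnet \
  -n master-subnet --address-prefixes 10.0.0.0/23
az network vnet subnet create -g "$RG" --vnet-name aro-vnet \
  -n worker-subnet --address-prefixes 10.0.2.0/23

# Create the cluster; add --pull-secret @pull-secret.txt if you have one,
# and --apiserver-visibility/--ingress-visibility Private for a private cluster.
az aro create -g "$RG" -n "$CLUSTER" --vnet aro-vnet \
  --master-subnet master-subnet --worker-subnet worker-subnet
EOF
chmod +x deploy-aro.sh
```

The cluster creation itself typically takes on the order of half an hour, so running this from a reviewed script rather than interactively also makes it easier to retry or adapt.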
For a detailed step-by-step guide on creating your ARO cluster, refer to the official ARO documentation.\n","permalink":"https://rmmartins.com/2023/11/30/prerequisites-checklist-to-deploy-aro-cluster/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/aro/prereq-list/\"\u003ehttps://cloud.redhat.com/experts/aro/prereq-list/\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eBefore deploying an ARO cluster, ensure you meet the following prerequisites:\u003c/p\u003e\n\u003ch2 id=\"setup-tools\"\u003eSetup Tools\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eInstall Azure CLI\u003c/strong\u003e: Essential for managing Azure resources. Refer to the \u003ca href=\"https://learn.microsoft.com/cli/azure/install-azure-cli\"\u003eofficial documentation\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"verify-resources\"\u003eVerify Resources\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eCore Quota\u003c/strong\u003e: \u003ca href=\"https://learn.microsoft.com/azure/quotas/per-vm-quota-requests\"\u003eConfirm availability of at least 40 cores\u003c/a\u003e to create and run an OpenShift Cluster.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"permissions\"\u003ePermissions\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eRBAC Settings\u003c/strong\u003e:\n\u003cul\u003e\n\u003cli\u003eEnsure you have \u003cstrong\u003eContributor\u003c/strong\u003e and \u003cstrong\u003eUser Access Administrator\u003c/strong\u003e roles on the cluster resource group.\u003c/li\u003e\n\u003cli\u003eAssign \u003cstrong\u003eNetwork Contributor\u003c/strong\u003e role on the virtual network, if using a separate resource group.\u003c/li\u003e\n\u003cli\u003eFor stricter security policies, \u003ca href=\"https://learn.microsoft.com/azure/role-based-access-control/custom-roles\"\u003ecreate a custom role\u003c/a\u003e with necessary permissions. 
\u003ca href=\"https://docs.openshift.com/container-platform/4.14/installing/installing_azure/installing-azure-account.html#minimum-required-permissions-ipi-azure_installing-azure-account\"\u003eReference link\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eMicrosoft Entra (Former Azure AD)\u003c/strong\u003e:\n\u003cul\u003e\n\u003cli\u003eHave a member user of the tenant or a guest with \u003cstrong\u003eApplication administrator\u003c/strong\u003e role for the tooling to create an application and service principal on your behalf for the cluster.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eTerraform\u003c/strong\u003e: If you plan to use Terraform for the deployment of the cluster, \u003ca href=\"https://github.com/rh-mobb/terraform-aro-permissions\"\u003esee here\u003c/a\u003e the required permissions.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"azure-integration\"\u003eAzure Integration\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eResource Provider\u003c/strong\u003e:\n\u003cul\u003e\n\u003cli\u003eRegister the \u003ccode\u003eMicrosoft.RedHatOpenshift\u003c/code\u003e resource provider. 
\u003ca href=\"https://learn.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider\"\u003eReference link\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eRed Hat Integration\u003c/strong\u003e:\n\u003cul\u003e\n\u003cli\u003eObtain a \u003ca href=\"https://console.redhat.com/openshift/install/azure/aro-provisioned\"\u003eRed Hat pull secret\u003c/a\u003e (Recommended for access to additional content like Operators and Container Registries).\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"domain-configuration\"\u003eDomain Configuration\u003c/h2\u003e\n\u003cp\u003eThis step is optional since you can use the built-in domain.\u003c/p\u003e","title":"Prerequisites Checklist to Deploy ARO Cluster"},{"content":"This article was originally published at Setup a VPN Connection into an ARO Cluster with OpenVPN | Red Hat Cloud Experts\nWhen you configure an Azure Red Hat OpenShift (ARO) cluster with a private only configuration, you will need connectivity to this private network in order to access your cluster. This guide will show you how to configure a point-to-site VPN connection so you won\u0026rsquo;t need to setup and configure Jump Boxes.\nPrerequisites a private ARO Cluster git openssl Create certificates to use for your VPN Connection There are many ways and methods to create certificates for VPN, the guide below is one of the ways that works well. 
Note, that whatever method you use, make sure it supports \u0026ldquo;X509v3 Extended Key Usage\u0026rdquo;.\nClone OpenVPN/easy-rsa git clone https://github.com/OpenVPN/easy-rsa.git Change to the easyrsa directory cd easy-rsa/easyrsa3 Initialize the PKI ./easyrsa init-pki Edit certificate parameters Copy the sample values file\ncp pki/vars.example pki/vars Uncomment and edit the copied template with your values\nvim pki/vars set_var EASYRSA_REQ_COUNTRY \u0026#34;US\u0026#34; set_var EASYRSA_REQ_PROVINCE \u0026#34;California\u0026#34; set_var EASYRSA_REQ_CITY \u0026#34;San Francisco\u0026#34; set_var EASYRSA_REQ_ORG \u0026#34;Copyleft Certificate Co\u0026#34; set_var EASYRSA_REQ_EMAIL \u0026#34;me@example.net\u0026#34; set_var EASYRSA_REQ_OU \u0026#34;My Organizational Unit\u0026#34; Uncomment (remove the #) the following field\n#set_var EASYRSA_KEY_SIZE 2048 Create the CA: ./easyrsa build-ca nopass Generate the Server Certificate and Key ./easyrsa build-server-full server nopass Generate Diffie-Hellman (DH) parameters ./easyrsa gen-dh Generate client credentials ./easyrsa build-client-full azure nopass Set environment variables for the CA certificate you just created. CACERT=$(openssl x509 -in pki/ca.crt -outform der | base64) Set Environment Variables AROCLUSTER=\u0026lt;cluster name\u0026gt; ARORG=\u0026lt;resource group the cluster is in\u0026gt; UNIQUEID=$RANDOM LOCATION=$(az aro show --name $AROCLUSTER --resource-group $ARORG --query location -o tsv) VNET_NAME=$(az network vnet list -g $ARORG --query \u0026#39;[0].name\u0026#39; -o tsv) GW_NAME=${USER}_${VNET_NAME} GW_SUBNET_PREFIX=e.g. 10.0.7.0/24 # choose a new available subnet in the VNET your cluster is in. 
VPN_PREFIX=172.18.0.0/24 Create an Azure Virtual Network Gateway Request a public IP Address az network public-ip create \\ -n $USER-pip-$UNIQUEID \\ -g $ARORG \\ --allocation-method Static \\ --sku Standard \\ --zone 1 2 3 pip=$(az network public-ip show -g $ARORG --name $USER-pip-$UNIQUEID --query \u0026#34;ipAddress\u0026#34; -o tsv) Create a Gateway Subnet az network vnet subnet create \\ --vnet-name $VNET_NAME \\ -n GatewaySubnet \\ -g $ARORG \\ --address-prefix $GW_SUBNET_PREFIX Create a virtual network gateway az network vnet-gateway create \\ --name $GW_NAME \\ --location $LOCATION \\ --public-ip-address $USER-pip-$UNIQUEID \\ --resource-group $ARORG \\ --vnet $VNET_NAME \\ --gateway-type Vpn \\ --sku VpnGw3AZ \\ --address-prefixes $VPN_PREFIX \\ --root-cert-data pki/ca.crt \\ --root-cert-name $USER-p2s \\ --vpn-type RouteBased \\ --vpn-gateway-generation Generation2 \\ --client-protocol IkeV2 OpenVPN Go grab a coffee; this takes about 15 – 20 minutes.\nConfigure your OpenVPN Client Retrieve the VPN Settings From the Azure Portal, navigate to your Virtual Network Gateway, select Point-to-site configuration, and then click Download VPN Client.\nThis will download a zip file containing the VPN Client.\nCreate a VPN Client Configuration Uncompress the file you downloaded in the previous step and edit the OpenVPN/vpnconfig.ovpn file.\nNote: The next two commands assume you are still in the easyrsa3 directory.\nIn vpnconfig.ovpn, replace the $CLIENTCERTIFICATE line with the entire contents of:\nopenssl x509 -in pki/issued/azure.crt Make sure to copy the -----BEGIN CERTIFICATE----- and the -----END CERTIFICATE----- lines.\nAlso replace the $PRIVATEKEY line with the output of:\ncat pki/private/azure.key Make sure to copy the -----BEGIN PRIVATE KEY----- and the -----END PRIVATE KEY----- lines.\nAdd the new OpenVPN
configuration file to your OpenVPN client. Mac users – just double-click the vpnconfig.ovpn file and it will be imported automatically.\nConnect your VPN. ","permalink":"https://rmmartins.com/2023/03/29/setup-a-vpn-connection-into-an-aro-cluster-with-openvpn/","summary":"\u003cp\u003e\u003cem\u003eThis article was originally published at \u003ca href=\"https://cloud.redhat.com/experts/aro/vpn/\"\u003eSetup a VPN Connection into an ARO Cluster with OpenVPN | Red Hat Cloud Experts\u003c/a\u003e\u003c/em\u003e\u003c/p\u003e\n\u003cp\u003eWhen you configure an Azure Red Hat OpenShift (ARO) cluster with a private-only configuration, you will need connectivity to this private network in order to access your cluster. This guide will show you how to configure a point-to-site VPN connection so you won\u0026rsquo;t need to set up and configure Jump Boxes.\u003c/p\u003e","title":"Setup a VPN Connection into an ARO Cluster with OpenVPN"},{"content":"Hey, I\u0026rsquo;m Ricardo Martins — welcome to my memory dump.\nThis blog is where I offload the stuff I don\u0026rsquo;t want to forget. I\u0026rsquo;ve lost count of how many times I spent hours learning something cool, only to forget it three months later. Writing helps me remember — and if it helps someone else along the way, even better.\nInspired by the article \u0026ldquo;Do things, write about it,\u0026rdquo; this has been my quiet little corner of the web since 2007 — mostly in Brazilian Portuguese in this blog, though I\u0026rsquo;m branching out more into English.\nWho is Ricardo? Born in Niterói, Brazil, on December 31, 1984.\nI\u0026rsquo;m a family-first kind of guy — lucky husband, proud father of three. Reserved, a little conventional, definitely organized. I like things that make sense and people who keep it real.\nI work best with purpose, structure, and persistence — but I\u0026rsquo;m also curious, calm under pressure, and never afraid to admit what I don\u0026rsquo;t know.
Challenges fuel me more than money ever could. If something doesn\u0026rsquo;t spark emotion, I won\u0026rsquo;t last long doing it.\nWhat do I do? I\u0026rsquo;m an IT professional with 20+ years of experience, working across infrastructure, DevOps, and cloud computing. My focus areas include:\nCustomer onboarding \u0026amp; content development Cloud Architecture \u0026amp; Operations Kubernetes and AKS Infrastructure as Code DevOps culture \u0026amp; automation I made the leap from traditional sysadmin work to cloud and DevOps in 2012, and it completely changed how I think. I didn\u0026rsquo;t come from a coding background — so learning IaC, pipelines, and reliability engineering was a major shift. But I fell in love with the culture behind it: collaboration, feedback loops, autonomy.\nOver time, I realized that the real magic happens when you bridge the gap between infrastructure and development. It\u0026rsquo;s not about writing more code — it\u0026rsquo;s about understanding developer needs and building systems that are simple, autonomous, and just work.\nMy north star: Make life easier for developers, operators, and customers alike. Build systems that are simple, efficient, and don\u0026rsquo;t break at 2AM.\nCareer overview (Highlights) Principal Cloud Solution Architect, Microsoft (Apr 2024 – Present) Senior OpenShift Black Belt, Red Hat (2023–2024) Azure FastTrack Engineer, Microsoft (2019–2023) Azure Technical Trainer, Microsoft US (2019) Cloud Solution Architect, Microsoft Brazil (2015–2019) Systems Engineer \u0026amp; Consultant (2003–2015) From Brazil to Redmond to Florida. Lots of moves, lots of growth, and tons of stories in between.\nEducation I don\u0026rsquo;t have a traditional academic path. I started and stopped college more times than I can count and finally crossed the finish line years later — but I\u0026rsquo;ve never stopped learning. 
Here\u0026rsquo;s the summary:\nIncomplete Master\u0026rsquo;s degree in Electronic Engineering/Computer Networks and Distributed Systems (Universidade Estadual do Rio de Janeiro) Associate Degree in Computer Networking (Senac/RJ) Technical Degree in Electronics (Escola Técnica Estadual Henrique Lage) Active certifications Microsoft Azure Network Engineer Associate Microsoft AI Fundamentals Red Hat Certified OpenShift Administrator Red Hat Certified Specialist in Containers AWS Certified Cloud Practitioner Kubernetes and Cloud Native Associate Linux Foundation Certified Systems Administrator Azure Solutions Architect Expert Azure DevOps Engineer Expert Azure Security Engineer Associate Azure Administrator Associate Azure Fundamentals List of other certifications on Credly Fun facts \u0026amp; Turning points 2002 — At 17, I took my first course on computer repair. That was my first real step into the world of IT.\n2003 — Landed my first job as a technical intern at a small wireless ISP. It wasn\u0026rsquo;t glamorous, but it was where I learned how real-world tech works.\n2004 — Took an official Conectiva Linux course and earned my first professional certification. I was hooked.\n2005 — Enrolled in Computer Science (Universidade Plínio Leite). Dropped out before the semester ended. Life moved fast, and college didn\u0026rsquo;t always keep up.\n2005–2011 — Spent this period working shifts — weekends, holidays, nights — and juggling multiple jobs. I was also bouncing between college degrees and real-world experience.\n2007 — Got married at 22. That same year, enrolled in an Associate Degree in Computer Networking (Universidade Estácio de Sá)… and dropped out again. Timing wasn\u0026rsquo;t right.\n2009–2011 — Juggled up to three jobs: a full-time contract role, part-time IT consulting, and technical training for Microsoft on Saturdays. Hustle mode.\n2010 — Tried again with Computer Engineering (Universidade Veiga de Almeida).
Gave it my best, but had to leave that one behind too.\n2011 — Became a father for the first time. That changed everything. I made a promise to finish at least one degree — and finally did, earning a BTech in Computer Networking (Senac/RJ).\n2012 — Got into Cloud Computing at a startup. Also experienced my first layoff — painful, but necessary growth.\n2013 — Became a dad again and finally graduated from college after eight years of starts and stops. That same year, picked up a remote weekend gig for a hosting company in New Zealand.\n2015 — Landed my first role at a multinational company. Felt like a milestone moment.\n2016 — Took my first international trip for a tech training event. My English was rough, my nerves worse — but I showed up.\n2018 — Third child, third big perspective shift.\n2019 — Moved to Redmond, WA with my family via internal transfer. Became a technical trainer in the US — still improving my English on the fly.\n2020 — After 11 months in Washington, we moved to Florida. We\u0026rsquo;ve called Winter Garden home ever since.\nThe thread through it all?\nI\u0026rsquo;ve failed, pivoted, stretched, and kept going. I\u0026rsquo;ve worn many hats — trainer, engineer, consultant, mentor. I\u0026rsquo;ve juggled too much, spoken broken English in big rooms, and learned most things by messing up first.\nI\u0026rsquo;m not here because it was easy. I\u0026rsquo;m here because I didn\u0026rsquo;t stop. I didn\u0026rsquo;t have a straight path — but I never stopped moving.\nThe human side I\u0026rsquo;m not just what I do. I\u0026rsquo;m someone who…\nsings badly but sings anyway, doesn\u0026rsquo;t like soccer, but loves silence and thunderstorms, gets nostalgic about old cartoons and handwritten notes, still sends \u0026ldquo;I love you\u0026rdquo; messages out of nowhere, is clumsy enough to spill juice but careful enough to keep learning, believes in God and the power of kindness, dreams big but appreciates the small stuff. 
calms down watching sunsets. overthinks. forgets things. stays up too late reading or writing. believes we\u0026rsquo;re all here to make something better — even if it\u0026rsquo;s just someone\u0026rsquo;s day. I don\u0026rsquo;t always know where I\u0026rsquo;m going — but I show up, stay curious, and keep trying to do the next right thing.\nFinal Thought At the end of this life, no one will care how many titles we had or what car we drove.\nWe\u0026rsquo;ll be remembered for how we loved. How we showed up. How we made others feel.\nOne day, I believe we\u0026rsquo;ll be asked not what we achieved, but who we became.\nWere we kind? Were we honest? Did we bring out the best in those around us?\nDid we make life a little better?\nThat\u0026rsquo;s the life I\u0026rsquo;m building — one project, one day, one memory at a time.\n","permalink":"https://rmmartins.com/about/","summary":"About Ricardo Martins","title":"About"},{"content":"A curated list of my open-source projects, ebooks, hackathons, and tools — mostly focused on Linux, Kubernetes, Azure, AI, and DevOps.\nLearning Ecosystem A progressive learning path for infrastructure professionals:\n# Project Description 1 Linux Hackathon 20 hands-on challenges covering Linux fundamentals. Part of Microsoft\u0026rsquo;s \u0026ldquo;What The Hack\u0026rdquo; program 2 From Server to Cluster Kubernetes ebook for Linux professionals. 15 chapters bridging Linux skills → K8s 3 K8s Hackathon 20 hands-on Kubernetes challenges covering 100% of CKA + CKAD + CKS certification domains 4 AI for Infra Pros Practical AI handbook for infrastructure engineers. 15 chapters, 220+ pages, hands-on labs Ebooks \u0026amp; Guides Project Description Azure Governance Made Simple 30 chapters on identity, policy, IaC, cost, observability, and governance at scale Startup-Scale Landing Zone Opinionated landing zone for startups on Azure.
Deploy in under 1 hour (Bicep + Terraform) Azure Digital Natives Guide Complete checklist for startups and digital-native teams on Azure Tools \u0026amp; Feeds Project Description PTU Calculator PTU estimator for Azure OpenAI. Compares PAYGO, PTU, and hybrid pricing models AKS Newsletter Monthly curated updates on Azure Kubernetes Service Azure Feed Daily aggregator of Azure blog updates Study Guides Guide Link AKS Learning Path aks-learning.github.io Azure Fundamentals Study Guide azure-fundamentals.com Azure Readiness aka.ms/azreadiness Azure Certification Guide aka.ms/azcertification All repositories on my GitHub\n","permalink":"https://rmmartins.com/projects/","summary":"Open-source projects, ebooks, hackathons, and tools","title":"Projects"}]