Day two of SHARE Cleveland delivered another round of standout sessions, blending innovation with real-world lessons. From artificial intelligence-driven ransomware simulations to the pros and cons of “vibe coding,” SHARE’d Intelligence is here with your August 19, 2025, recap. If a topic sparks your interest, let us or the presenters know — it might just become a future deep dive!
Mainframe Ransomware Attack That Worked
We started the morning strong, attending Vertali Technical Director Mark Wilson’s standing-room-only session, where he proved just how easy it can be to hack into the mainframe with artificial intelligence (AI). Wilson shared his previously conducted ransomware testing scenario, in which he built custom tools with AI assistance and launched a simulated attack. The entire process took just 90 minutes to prepare, and once executed, the attack crippled the system in under eight minutes.
He then demonstrated elements of the attack live, simulating a rogue system programmer targeting specific datasets. Using ChatGPT, Wilson was able to generate assembler code with minimal prompting. While ChatGPT initially resisted, it eventually provided working examples that required little modification. The tools — built using assembler, standard utilities, and applications like ZenCrypt, an all-in-one encryption tool — were effective and disturbingly simple to deploy.
Key takeaways from the session:
- AI lowers the barrier to entry: With the right prompts, even complex attack code can be generated quickly and accurately.
- Speed is the real threat: Once the tools are in place, an attack can unfold in minutes — leaving little time for detection or response.
- Monitoring isn’t foolproof: Wilson challenged the assumption that existing monitoring tools are able to catch this kind of activity. Would anyone notice? And if they did, how fast could they respond?
- Recovery is complex and situational: Whether surgical or catastrophic, recovery depends on having full-volume safeguarded copies and a multi-tiered plan. Even a few hours can be devastating.
- Insider threats are real: The attack scenario was based on a privileged user going rogue — something every organization must prepare for.
Audience commentary added depth:
- One attendee asked whether file integrity monitoring could mitigate the risk. Wilson agreed it might detect anomalies but emphasized that detection alone isn’t enough. Systems need to respond automatically and decisively.
- Another asked whether an attacker could disable monitoring tools first. Wilson confirmed that it would be his first move — and a realistic one.
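For readers unfamiliar with the file integrity monitoring approach raised by the audience, the core idea is simple: hash each protected file at a known-good point in time, then periodically re-hash and compare. A minimal Python sketch (the helper names and file layout are illustrative, not Wilson’s tooling or any specific product):

```python
import hashlib
import os

def _sha256(path):
    # Hash the full file contents; real tools also track metadata and permissions.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    """Record a SHA-256 baseline hash for each monitored file."""
    return {p: _sha256(p) for p in paths}

def detect_changes(baseline, paths):
    """Compare current hashes against the baseline; report modified or missing files."""
    findings = []
    for p in paths:
        if not os.path.exists(p):
            findings.append((p, "missing"))
        elif _sha256(p) != baseline[p]:
            findings.append((p, "modified"))
    return findings
```

As Wilson noted, detection like this is only half the battle: a ransomware run that encrypts datasets in minutes demands an automated response, not just an alert in a queue.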
Attackers don’t need deep technical expertise; they just need the right prompts. And defenders need to think like attackers to stay ahead.
Building Open-Source Solutions in a Closed Community
Next, we headed to two “flash sessions,” which offer key insights in a speedier format. The first was John Gontaryk’s presentation, which explored how open source could help address some of the mainframe’s most pressing challenges. Gontaryk, an AI software engineer at IBM, laid out what he sees as the core issues facing the mainframe today and how open source can offer meaningful solutions.
One of the key challenges he highlighted is the steep learning curve for new system programmers. It can take anywhere from two to seven years for someone to become fully productive, yet the median tenure for younger employees is a little less than three years. With mainframe departments shrinking and fewer midcareer professionals available, this mismatch creates a costly gap in continuity and expertise.
To address this, Gontaryk advocated for making the mainframe more accessible through modern technologies — tools like Python, JavaScript, and automation that can shorten onboarding time and align with newer hires’ existing skillsets. He positioned open source as a natural extension of this modernization effort.
From his perspective, open source offers several advantages:
- It enables collaboration across organizations, allowing developers to build shared solutions rather than starting from scratch.
- It lowers the barrier to entry, allowing developers to make updates and submit pull requests without needing deep institutional knowledge.
- It can be more secure than proprietary code, thanks to timely patches and community-driven audits. Security should come from design, not secrecy, according to Gontaryk.
Gontaryk closed by encouraging attendees to get involved — whether by requesting features, reporting bugs, or contributing code. He believes open source isn’t just compatible with the mainframe; it’s a strategic opportunity to modernize, collaborate, and build a more sustainable future.
Vibe Coding: The Good, the Bad, and the Ugly
Our second flash session tackled the newly trending concept of “vibe coding,” often described as coding with AI — sometimes even without deep expertise. BMC Software’s Anthony DiStauro, distinguished engineer, and Rebecca Parchman, R&D solutions architect, explored the promise and pitfalls of this growing phenomenon, sparking lively discussion from the audience.
The Good: AI reduces friction, accelerates prototyping, and allows developers to spend more time on creativity, problem-solving, and business outcomes. DiStauro highlighted the energy and enthusiasm surrounding the trend, particularly among younger developers. Audience members noted how AI is improving onboarding and education, though some wondered whether it might also slow deeper learning.
The Bad: AI-generated code isn’t always reliable. Inconsistent output, hallucinations, and subtle errors can burden reviewers and increase the risk of technical debt. Parchman emphasized quality assurance concerns, especially when junior developers copy and paste code without fully understanding it.
The Ugly: Overreliance on AI could erode skills, expand intellectual property risks when leveraging third-party tools, and create potential blind spots if developers lean on AI to complete sprint work without engaging with the logic. Several attendees shared mitigation strategies, such as requiring new developers to build internal projects and demonstrate their understanding to senior staff.
Emerging Best Practices: Suggestions included documenting AI-generated code in comments, limiting its use in production (vs. prototypes), and training engineers in effective prompt engineering. DiStauro offered a broader perspective: programming has always been about abstraction. In the early days, understanding hardware was essential; over time, that layer was abstracted away. AI may be the next layer — where success depends less on writing every line and more on knowing how to prompt, set guardrails, and debug effectively. Looking further ahead, he even speculated that future AI systems might produce code so advanced that human-readable languages could become optional.
DiStauro reassured attendees that their jobs remain safe — provided AI is used wisely. When the hype settles, it’ll find its place in the tech stack. Lean on it, but don’t become dependent on it.

Caption: Live band at SHARE Cleveland
What Does Transforming the Mainframe Really Mean?
After lunch (which featured a live rock band — including a performance by SHARE member Greg Lotko, senior vice president and general manager for Broadcom’s Mainframe Software Division), we dove into an afternoon packed with technical sessions and workshops.
One standout was a panel that featured Anthony (Tony) Anter, DevOps architect & evangelist at BMC (SHARE’s Editorial Advisory Committee chair), Michael Davis, senior programmer analyst at Edward Jones, and Mark Schettenhelm, principal product manager at BMC. Moderator and BMC Director of Value and Solutions Engineering Tim Ceradsky asked the panelists a key question: “What does it really mean to modernize the mainframe?”
Spoiler — it’s not just about tools. It’s about people, culture, and strategy.
Why Transform?
Each speaker shared their journey from legacy systems and steep learning curves to embracing DevOps and hybrid architectures. Their consensus is that the mainframe isn’t going away, but it must evolve. Transformation helps attract new talent, improve agility, and integrate with modern workflows.
Culture First, Then Code
Change management was a recurring theme. Forcing new tools on seasoned developers doesn’t work. Instead, involve them early, show how their lives improve, and find teams eager to lead by example. Anter suggested not beginning with the most resistant team members.
Dev Meets Ops
DevOps isn’t just about speeding up pipelines. It’s about smooth handoffs to operations, feedback loops, and building strong site reliability engineering (SRE) practices. Automation helps, but culture and communication are key. (Learn more about SRE roles in this SHARE’d Intelligence article).
Metrics That Matter
Forget vanity metrics. The panel pointed to the four key DevOps metrics identified by DevOps Research and Assessment (DORA), a research group: deployment frequency, lead time for changes, mean time to recovery, and change failure rate. These key performance indicators (KPIs) are widely used to measure software delivery performance and help teams demonstrate progress and return on investment (ROI).
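The four DORA metrics are straightforward to compute once a team logs its deployments. As a minimal sketch (the record format and the numbers are invented for illustration):

```python
# Hypothetical deployment records for one 30-day period: each entry notes
# whether the deploy caused a failure in production, the commit-to-production
# lead time, and the time to restore service if it failed.
deployments = [
    {"failed": False, "lead_hours": 20, "restore_hours": None},
    {"failed": True,  "lead_hours": 36, "restore_hours": 2.5},
    {"failed": False, "lead_hours": 12, "restore_hours": None},
    {"failed": False, "lead_hours": 8,  "restore_hours": None},
]

def dora_metrics(deps, period_days):
    n = len(deps)
    failures = [d for d in deps if d["failed"]]
    return {
        # Deployment frequency: deploys per week over the period
        "deploys_per_week": n / (period_days / 7),
        # Lead time for changes: mean hours from commit to production
        "mean_lead_hours": sum(d["lead_hours"] for d in deps) / n,
        # Change failure rate: share of deploys that caused a failure
        "change_failure_rate": len(failures) / n,
        # Mean time to recovery: mean hours to restore service after a failure
        "mttr_hours": (sum(d["restore_hours"] for d in failures) / len(failures)
                       if failures else 0.0),
    }

print(dora_metrics(deployments, period_days=30))
```

Tracked over time rather than as one-off numbers, these four figures give teams the trend lines needed to demonstrate progress and ROI.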
Tomorrow brings the final installment of our recap series, but the learning and networking continue through Thursday, August 21, 2025. If we missed your favorite session, let us know — we’d love to hear your highlights!
Want more technical education? Become a member today!