AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
8h ago · 9 min read · April 2026 · Python · AWS · Bedrock · We're at the Microservices Moment for AI · The Landscape: In the early days of cloud architecture, teams built monolithic applications and eventually lea…
4h ago · 3 min read · If you've ever tried to run some serious AI or machine learning workloads on Kubernetes, you know networking can be a real pain point. Moving massive datasets around, especially when you're dealing wi…
41m ago · 10 min read · AI-First Companies: How AI-Native Firms Are Dismantling Traditional Industries · By Dirk Roethig | Freelance Journalist & Environmental Consultant | March 7, 2026 · Cursor hit $1.2 billion in annual recurring revenue in 2025. Harvey, a two-year-old lega...
1h ago · 14 min read · Photo: Vincent Tjeng · Jeff Dean presented the Pathways vision back in 2021: train a single large model that can do millions of things. At the time, ChatGPT didn't exist yet, and this idea felt genuinel…
3h ago · 4 min read · RisingWave Agent Skills is an open-source toolkit that teaches AI coding agents how to correctly build stream processing pipelines with RisingWave. It ships two skills -- a core reference and a 14-rule best practices guide -- covering the Source to M...
4h ago · 6 min read · Are 'installation' and 'deployment' simply two words for the same process? While often used interchangeably in casual conversation, these terms represent fundamentally distinct stages in the software…
Building, What Matters.... · 2 posts this month
APEX, ORDS & the Oracle Database · 1 post this month
Obsessed with crafting software. · 4 posts this month
Sr. Staff Software Engineer @ CentralReach - Working with MAUI / .NET / SQL Server / React · 1 post this month
Most are still shipping “AI add-ons.” The real shift happens when the whole workflow disappears into one action — that’s when users actually feel the value.
The OWASP LLM risks become even more critical when you consider that AI coding agents now have shell access and can modify files directly. Prompt injection isn't just a chatbot problem anymore — it's a supply chain risk when an agent reads untrusted input (like a GitHub issue body) and executes code based on it. Two practical mitigations I've found effective: 1) Sandboxing agent execution so it can't access credentials or production systems, and 2) Using pre-commit hooks that scan for common patterns like hardcoded secrets or suspicious shell commands in AI-generated code. Claude Code's hook system supports this natively, which helps enforce security gates in the CI pipeline automatically.
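The second mitigation above can be sketched without any tooling: a pre-commit hook is ultimately just a script that scans staged files and fails the commit on a match. This is a minimal, dependency-free TypeScript sketch; the pattern names and regexes are illustrative examples, not a complete or production-grade ruleset, and nothing here is Claude Code's actual hook API.

```typescript
// Sketch of the kind of pattern scan a pre-commit hook could run over
// AI-generated files. Rules below are illustrative, not exhaustive.
const SUSPICIOUS_PATTERNS: { name: string; re: RegExp }[] = [
  { name: "aws-access-key", re: /AKIA[0-9A-Z]{16}/ },
  { name: "hardcoded-password", re: /password\s*[:=]\s*["'][^"']+["']/i },
  { name: "private-key-block", re: /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/ },
  { name: "curl-pipe-shell", re: /curl[^|\n]*\|\s*(?:ba)?sh/ },
];

// Returns the names of every rule the source text trips.
function scanForSecrets(source: string): string[] {
  const findings: string[] = [];
  for (const { name, re } of SUSPICIOUS_PATTERNS) {
    if (re.test(source)) findings.push(name);
  }
  return findings;
}
```

In a real hook you would run this over `git diff --cached` output and exit non-zero on any finding; dedicated scanners (gitleaks, detect-secrets) do the same thing with far richer rulesets.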
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation: the API returns an unexpected null, a renamed field, an edge case you never tested, and your types never noticed. Zod fixes this. Parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too; the server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
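"Parse at the boundary" can be shown without any library: validate the raw payload once, and only typed data crosses into application code. This is a hand-rolled sketch of the pattern; Zod's `schema.safeParse()` gives you the same ok/error result shape with far less boilerplate, and the `User` fields here are illustrative assumptions, not a specific API.

```typescript
// Parse-at-the-boundary sketch: validate the unknown API payload before it
// crosses into typed code. A Zod schema would replace the manual checks.
type User = { id: number; email: string };

type ParseResult =
  | { ok: true; value: User }
  | { ok: false; error: string };

function parseUser(raw: unknown): ParseResult {
  if (typeof raw !== "object" || raw === null) {
    return { ok: false, error: "payload is not an object" };
  }
  const obj = raw as Record<string, unknown>;
  if (typeof obj.id !== "number") {
    return { ok: false, error: "id must be a number" };
  }
  if (typeof obj.email !== "string") {
    return { ok: false, error: "email must be a string" };
  }
  return { ok: true, value: { id: obj.id, email: obj.email } };
}
```

The payoff is the failure mode: a renamed or null field produces a local, descriptive error at the call site instead of an `undefined` propagating into the UI.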
This really clicked for me. It kind of reminds me of how backend systems moved away from one big service into smaller pieces that each do one thing well. Also the idea that context is something you have to manage instead of just keep adding to it… that changes how you approach the whole thing. Feels like the hard part isn’t prompting anymore, but how you structure everything around it.
You’re definitely not alone; that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews. The common pattern I’m seeing is a hybrid approach, neither purely human nor purely automated: AI generates code → automated checks (linting, tests, security, architecture rules) → targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
Great breakdown of a decision most teams get wrong by defaulting to whatever's trending. The key insight people miss: BFF isn't an alternative to API Gateway — they solve different problems at different layers. API Gateway handles cross-cutting concerns (auth, rate limiting, routing) while BFF handles client-specific data shaping. You can absolutely run both. Where GraphQL fits depends on your team's query complexity — if your frontend needs to fetch deeply nested, variable-shape data across multiple domains, GraphQL shines. But if you're mostly doing CRUD with predictable payloads, a BFF with REST is simpler to cache, easier to debug, and doesn't require the schema stitching overhead. The real question should be: how many distinct clients are consuming your API? One client = REST is fine. Three+ clients with wildly different data needs = that's where BFF or GraphQL earns its complexity budget.
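The "client-specific data shaping" half of that split is easy to show concretely: one upstream domain object, one view function per client. A minimal sketch, assuming a hypothetical `Product` payload; the field names and both view shapes are invented for illustration, not any particular API.

```typescript
// BFF sketch: the same upstream domain payload, shaped per client.
type Product = {
  id: string;
  name: string;
  description: string;
  priceCents: number;
  inventory: { warehouse: string; count: number }[];
};

// Mobile client: small payload, no inventory detail.
function toMobileView(p: Product) {
  return { id: p.id, name: p.name, price: p.priceCents / 100 };
}

// Admin dashboard: needs stock totals the mobile app never shows.
function toAdminView(p: Product) {
  return {
    id: p.id,
    name: p.name,
    description: p.description,
    totalStock: p.inventory.reduce((sum, w) => sum + w.count, 0),
  };
}
```

With one client, these functions are overhead; with three clients pulling wildly different shapes, they are exactly the complexity budget the comment describes, while auth and rate limiting stay upstream in the gateway.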
We're building an AI agent orchestration platform using Claude (Coworker) for code generation paired with local builds and iteration. Our current workflow: Feature planning in Claude (conversational)
The PR workflow with AI-assisted dev needs one critical addition: prompt versioning. When your system prompts affect code generation quality...
What are the advantages of using web applications (like Google Workspace) over traditional programs installed on your computer?