Meet the team behind a complex data migration

30 Apr, 2026 | 5-minute read

Every project has a status report. Fewer have the honest version, the one that lives in the decisions made before the documentation existed, in the late nights that never made it onto a Jira board, and in the quiet satisfaction of watching something complex finally hold in production.

We asked the team behind this data migration to share theirs. Here’s what they said.

What’s a decision you made early that you were grateful for later? One decision that proved especially valuable was prioritizing a flexible integration design from the very beginning. By setting up a strong foundation, we were able to adapt quickly as requirements evolved and maintain consistent performance as the project grew. This forward-thinking approach helped us deliver reliably while keeping rework to a minimum.

What’s the one thing about this project that would only make sense to another Microsoft specialist? What might look like straightforward data integration actually involves a lot of thoughtful decision-making behind the scenes, especially when working within the Microsoft ecosystem. Choosing the right combination of Azure services and making sure they work well together requires careful planning to achieve reliability, efficiency, and long-term sustainability.

Simona Matrakoska
Sr. Technical Consultant | Microsoft
Valentina Petrovska
Intermediate Technical Consultant | Project and Delivery Management Team

What’s your tell that a project is actually on track, beyond what’s in the status report? Beyond the status report, we look at whether the migrated data is accurate, complete, and usable for the business. We also track how each migration cycle is progressing: are issues decreasing, is rework minimal, is delivery becoming more stable and predictable? Stakeholder confidence matters too. If the business can trust and use the data, that’s the clearest sign the project is truly on track.

What’s one thing this team did that you wish every team would do? This team took full ownership, even in a challenging setup with no initial documentation and limited stakeholder support. Despite that, everyone invested significant extra effort to analyze, design, and deliver complex integrations independently. Their dedication, persistence, and problem-solving mindset made it possible to deliver what initially seemed very difficult. That level of commitment is something I wish every team would bring.

What’s something you built that looks simple but definitely wasn’t? The part that retrieves events from Salesforce looked straightforward on the surface, but it turned into a genuinely complex challenge because it had to be implemented consistently across completely different technologies. Aligning the approach, handling differences between systems, and making everything work reliably took much more effort than expected.

When you’re stuck, what AI tool helps you first? GitHub Copilot.

Aleksandra Nikleska
Sr. Technical Consultant | Microsoft
Elena Perijoska
Intermediate Technical Consultant | Microsoft

What’s one thing about working with AI that you couldn’t have predicted until you actually did it? One thing that surprised me is how non-deterministic AI is compared to traditional development. I expected it to behave like a typical API: same input, same output. But in reality, small prompt changes can lead to very different results. So instead of just integrating it, you spend time refining prompts, adding validation, and building guardrails around it. It feels less like strict programming and more like guiding the system toward consistent outcomes.
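A minimal sketch of what such a guardrail can look like: retry the model call until its output parses and passes validation, rather than trusting any single response. The function and field names here are hypothetical, not taken from the team's actual implementation.

```python
import json

# Hypothetical required fields for an extraction task.
REQUIRED_KEYS = {"employee_id", "status"}

def guarded_extract(call_model, prompt: str, retries: int = 3) -> dict:
    """Call the model up to `retries` times until its output parses as
    JSON and contains the required keys; otherwise raise."""
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: ask again instead of crashing
        if REQUIRED_KEYS <= data.keys():
            return data  # passed the guardrail
    raise ValueError(f"no valid response after {retries} attempts")
```

Passing the model client in as a function keeps the guardrail independent of any particular AI SDK, which also makes it trivial to test with a fake model.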

What’s the gap between how people imagine AI works and how it actually works? People often expect AI to understand requirements and generate solid, production-ready code. In reality, it’s predicting patterns, not truly understanding the system. I’ve seen this when using AI to generate .NET code for Azure Functions or integrations. It can give you something that looks correct, with proper structure and even best practices, but once you plug it into a real workflow, you hit issues: missing edge-case handling, incorrect assumptions about configuration, subtle bugs. You still need to validate everything, adjust it to your actual architecture, and test it properly. The gap is between code that looks right and code that actually works in a real environment.

What’s something the rest of the team built that you only truly appreciated once it was in production? The Workday go-live is a great example. During development and testing, you see the pieces, but you don’t fully feel the impact. Once it went into production, I really appreciated how much thought went into the integrations, error handling, and monitoring.

What’s the first thing you check when something seems off? The first thing I check is when the issue started. Pinpointing the start time immediately helps narrow the search: What changed around that moment? Deployments, config updates, scheduled jobs? Do the logs show errors beginning at the same time? Did any alerts or metrics spike exactly then? By anchoring the investigation to a specific timeframe, I can quickly correlate events across systems and avoid digging blindly through hours of data. In operational support, knowing when something broke is often half the solution.

Janku Popovski
Sr. Technical Consultant | Support

What’s a piece of infrastructure you’d describe as “boring but essential”? Monitoring and logging. It’s not something people get excited about, but it’s absolutely critical. When everything is running smoothly, it stays in the background. The moment something goes wrong, it becomes the most important tool you have.

What’s the quiet signal that makes you stop and investigate? Small changes in behavior, like slightly slower response times or a gradual increase in retries or failures. These aren’t always urgent alerts, but catching them early usually prevents much bigger issues later on.

What’s the most satisfying moment, when everything finally connected? When our most complex workflow finally went live. It handled employee data flowing through multiple steps, with a lot of business logic and edge cases to manage. After weeks of debugging tricky issues, seeing it run in production with zero failures gave me that real “everything just works” feeling. It made all the long days worth it, and I’m really grateful to the team for pushing through it together.

What’s one thing that looked simple on the diagram but wasn’t? My part was actually quite clear once I got into it. But watching the broader migration happen around me was eye-opening. What looked like “just move the workflows from SnapLogic to Azure” turned out to involve a huge amount of groundwork: VNet setup, SAP connections, SFTP configurations, on-prem gateways. Things that don’t show up as boxes on a diagram at all. It gave me a lot of respect for the infrastructure side of a migration. The visible work is only part of the story.

Kristijan Celkoski
Technical Consultant | Integration
Stefan Stojanovski
Sr. QA Consultant | QA

What’s your instinct when something works perfectly on the first try? Cautious optimism. It’s great when something works immediately, but as a QA engineer, I want to prove it’s consistently correct, not just lucky once.

What’s one thing you tested that you were sure would break, and it held? A memorable test was validating a flow where the system had to take manual currency rates and reliably produce both directions in the final output. It felt like the kind of case where even a small mismatch could appear, so seeing it work cleanly end to end was a great moment.
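The "both directions" check described above amounts to verifying that every manual rate and its inverse round-trip to 1 (EUR→USD times USD→EUR should equal 1 within tolerance). A minimal sketch, with the dictionary layout assumed for illustration:

```python
def rates_consistent(rates, tol=1e-6):
    """Check that each manual currency rate has an inverse and that
    rate[(a, b)] * rate[(b, a)] round-trips to ~1.0."""
    for (a, b), rate in rates.items():
        inverse = rates.get((b, a))
        if inverse is None or abs(rate * inverse - 1.0) > tol:
            return False  # missing inverse or small mismatch
    return True
```

Even a small rounding mismatch in one direction fails the check, which is exactly the class of defect the QA team was hunting for.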