From Legacy Pain to Modern Gain: Automating Deployments and Disaster Recovery

Dec. 4, 2025  |  7 min read

As our operations expanded and we integrated additional backend systems and product lines, it became evident that our existing approach to software version management and disaster recovery could no longer keep pace. The growing complexity of our infrastructure, combined with the speed of deployments, revealed critical gaps in consistency, traceability, and overall resilience.

To address this, we undertook a comprehensive overhaul of our processes. We streamlined version control across environments, implemented automated rollback mechanisms, and established robust disaster recovery protocols tailored to each critical service. This ensured not only faster recovery times but also greater confidence in our ability to maintain continuity during unexpected disruptions.
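
To illustrate the shape of such a rollback mechanism, here is a minimal sketch assuming git-tagged releases checked out on a server and a simple HTTP health endpoint; the repository path, tag names and health URL are hypothetical placeholders, not our production configuration.

```python
# Minimal rollback sketch (illustrative only): check out a tagged release,
# health-check it, and fall back to the last known-good tag on failure.
import subprocess
import urllib.request

REPO_DIR = "/opt/app"                         # hypothetical deployment checkout
HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint

def git(*args: str) -> None:
    subprocess.run(["git", "-C", REPO_DIR, *args], check=True)

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy(new_tag: str, last_good_tag: str) -> None:
    git("fetch", "--tags")
    git("checkout", new_tag)
    # A real pipeline would restart or reload the service here.
    if not healthy():
        # Roll back automatically instead of waiting for a human to notice.
        git("checkout", last_good_tag)
        raise RuntimeError(f"{new_tag} failed health check; rolled back to {last_good_tag}")

# deploy("v2.4.1", "v2.4.0")
```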

Here’s how we approached it, and what we learned along the way.

As we expanded the business, we needed a more robust, packageable & repeatable process for getting our products, processes and systems to the places they needed to be. As strong advocates of remote working, or work from home (WFH), a shift accelerated by the global pandemic in 2020, we found that distributed operations exposed a previously hidden issue: the need to automate processes, systems and information in a repeatable and timely way.

This proved difficult: we had many competing stakeholder interests, and each needed to be reassured that money is not the only resource you cannot get back once you have spent it; time is another.

When we realised that every release meant logging into each and every server and manually typing out a long list of commands to push or pull our latest code between our on-site repositories and our cloud solutions, we felt there had to be a better way. We also quickly realised that once we automated these processes, failures would not always be immediately apparent without hours of troubleshooting.
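
As a sketch of the kind of automation this implies, assuming SSH access and a git checkout on each server (the hostnames and repository path below are hypothetical):

```python
# Hedged sketch: pull the latest code on each server over SSH and surface
# failures immediately, rather than discovering them hours later.
import subprocess

SERVERS = ["app-01.internal", "app-02.internal"]  # hypothetical hosts
REPO_PATH = "/srv/trading-app"                    # hypothetical repo path

failures = {}
for host in SERVERS:
    result = subprocess.run(
        ["ssh", host, f"git -C {REPO_PATH} pull --ff-only"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        failures[host] = result.stderr.strip()

if failures:
    # In production this would raise a support ticket (see the Freshdesk
    # section below) instead of just printing.
    for host, err in failures.items():
        print(f"PULL FAILED on {host}: {err}")
else:
    print("All servers updated successfully.")
```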

CEO Todd Gilbey said "Not only is it important to have the optimum UX for our external users, but it's also important for our internal employees and contractors that we have systems and processes that they actually enjoy working with. Nothing kills employee morale faster than having to use cumbersome, complicated, manual systems that should, in the 21st Century, be automated and at relatively little outlay compared to circa 20 years ago. We want our employees to actually enjoy using our systems & technology as part of their roles."

He recalled his time as a check-in agent with KLM Airlines in the late 2000s, working with the airline’s legacy system known as CODECO/HUGO. The interface - or lack thereof - was a stark reminder of how technology can hinder rather than help. There was no graphical user interface (GUI), just a monochrome blue screen where agents had to manually input raw code to register passenger details, baggage count, and weights.

"The system was unforgiving", he goes on to say. "A single mistyped character could trigger cascading errors, potentially delaying flights for hours. I witnessed first-hand how this complexity took a toll on staff, there were many occasions where my colleagues were reduced to tears, overwhelmed by the pressure and the brittle nature of the tools we were forced to use".

"It was a powerful lesson in the importance of intuitive design, error tolerance, and human-centered systems, one that continues to inform my approach to building resilient, user-friendly workflows today."

As our platform matured, we recognised the need for a robust reporting mechanism to ensure that errors and issues were identified and resolved without delay. Manual oversight wasn’t scalable, and we needed a system that could provide real-time visibility and accountability.

To address this, we implemented a support ticketing solution with full audit trail capabilities, allowing our engineers to track, triage, and resolve issues with complete transparency. This is where Freshdesk, Inc. came in. We onboarded them as a sub-processor to manage our support ticketing infrastructure, fully integrated into our website as well as into our trading systems & back office systems.

Users can raise issues directly via our contact form, but we’ve gone a step further: when critical errors occur, such as HTTP 500 failures, we automatically generate a support ticket. This eliminates the need for users to navigate complex menu trees or guess which department to contact. Instead, every issue is routed instantly to the right team, with all the context needed to act fast.
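
As a minimal sketch of how such a hook can be wired up against Freshdesk's public v2 ticket endpoint (the subdomain, API key, reporter address and field values below are illustrative placeholders, not our actual integration):

```python
# Illustrative only: raise a Freshdesk ticket when a request fails with an
# HTTP 500. Consult Freshdesk's API v2 docs for the authoritative fields.
import requests

FRESHDESK_DOMAIN = "example"        # hypothetical Freshdesk subdomain
FRESHDESK_API_KEY = "your-api-key"  # hypothetical credential

def raise_ticket_for_500(path: str, error_detail: str) -> None:
    resp = requests.post(
        f"https://{FRESHDESK_DOMAIN}.freshdesk.com/api/v2/tickets",
        auth=(FRESHDESK_API_KEY, "X"),  # Freshdesk uses the key as the username
        json={
            "subject": f"HTTP 500 on {path}",
            "description": error_detail,    # keep this free of personal data
            "email": "alerts@example.com",  # hypothetical reporter address
            "priority": 2,                  # 1 = low .. 4 = urgent
            "status": 2,                    # 2 = open
        },
        timeout=10,
    )
    resp.raise_for_status()
```

In practice a call like this would sit inside the web framework's error handler, so the ticket is created at the moment the failure occurs.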

This integration not only streamlines our support operations but also reinforces our commitment to proactive issue resolution and user experience.

The user can optionally provide their email address so that we can open a two-way dialogue before closing the ticket once they are happy with the resolution. This is, of course, handled in line with GDPR, and each user is signposted to how they can make a DSAR (Data Subject Access Request) in the future.

While version control posed its own challenges, the real friction point emerged during initial deployments. Although we had automated the process of deploying systems onto new local servers, the first-time setup remained a critical gap, one that demanded a more resilient, hands-off approach. Automating it also meant employees no longer needed to be familiar with git commands; git is a very unforgiving system for the unfamiliar, which made a robust reporting system all the more necessary.

To close this gap, we developed a fully automated bootstrap process that handled:

- Module installation: Ensuring all required dependencies were installed in the correct directories.

- Directory validation: Verifying the presence and integrity of key folders, and creating them if missing.

- Version tracking: Generating a version reference file for quick lookup and maintaining a .gitignore to prevent unnecessary clutter in version control.

Each step was fortified with error detection and reporting. If any inconsistencies were found, such as a missing module, an invalid file path, or a potential git pull conflict, the script would halt execution safely, preventing downstream issues and preserving system integrity.
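
A minimal sketch of such a bootstrap script, with hypothetical module names, directories and version string, might look like this:

```python
# Illustrative bootstrap sketch covering the three steps above: module
# checks, directory validation, and version tracking.
import importlib.util
import sys
from pathlib import Path

REQUIRED_MODULES = ["requests", "yaml"]       # hypothetical dependencies
REQUIRED_DIRS = [Path("logs"), Path("data")]  # hypothetical folders
VERSION = "1.4.2"                             # hypothetical release

def fail(msg: str) -> None:
    # Halt safely rather than continuing into an inconsistent state.
    sys.exit(f"BOOTSTRAP HALTED: {msg}")

# 1. Module installation check
for mod in REQUIRED_MODULES:
    if importlib.util.find_spec(mod) is None:
        fail(f"required module '{mod}' is not installed")

# 2. Directory validation (create if missing)
for d in REQUIRED_DIRS:
    d.mkdir(parents=True, exist_ok=True)
    if not d.is_dir():
        fail(f"could not create directory '{d}'")

# 3. Version tracking: a reference file for quick lookup, plus a
#    .gitignore so generated folders never clutter version control.
Path("VERSION").write_text(VERSION + "\n")
Path(".gitignore").write_text("logs/\ndata/\n")

print(f"Bootstrap complete for version {VERSION}")
```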

This approach not only streamlined software release deployments but also reduced the risk of human error, fatigue and complacency, making our infrastructure more predictable and scalable from the outset.

Whilst we've come a long way in a year, there are still major features we intend to implement to enhance not only the user experience but our employees' experience as well. We will be linking our accounting & financial reporting systems directly to the investor relations section of our website, reducing the need for manual intervention and the scope for error.

While automation has helped streamline initial deployments, one of the ongoing challenges lies in scaling these processes across increasingly diverse environments. Different operating systems, container platforms, and hybrid cloud setups introduce new layers of complexity, and the automation scripts that worked well in one context may not translate seamlessly into another. To remain resilient, organisations will need continuous validation pipelines that adapt to evolving infrastructure, ensuring automation remains portable and reliable.

Another challenge is maintaining traceability as deployments accelerate. The faster code and configurations move into production, the harder it becomes to guarantee auditability of every change. This calls for stronger integration between version control and observability tools, where GitOps pipelines are tied directly to monitoring dashboards. Such integration would provide real-time visibility into what changed, when, and why, reinforcing accountability across the system.
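
One lightweight way to make that link concrete is to record the exact commit of every deployment as a marker in the monitoring system; the webhook endpoint below is a hypothetical placeholder for whatever dashboard is in use:

```python
# Hedged sketch of tying a deployment to observability: annotate the
# monitoring system with the commit that just went out.
import subprocess
import requests

def current_commit() -> str:
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def annotate_deployment(environment: str) -> None:
    requests.post(
        "https://monitoring.example.com/api/deploy-markers",  # hypothetical
        json={"commit": current_commit(), "environment": environment},
        timeout=10,
    ).raise_for_status()

# annotate_deployment("production")
```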

Balancing user experience with security is also a pressing concern. Automating support ticket creation and error reporting improves usability, but it raises the risk of data leakage or false positives if error payloads contain sensitive information.

Future systems will need privacy-aware logging and anomaly detection to ensure transparency without compromising compliance with regulations such as GDPR, SOC2, or ISO27001.
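
As a minimal sketch of privacy-aware logging using Python's standard logging filters, here redacting only email addresses (a real implementation would need a much broader set of PII rules):

```python
# Illustrative only: scrub email addresses from log records before they
# reach tickets or dashboards.
import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactPII(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = EMAIL_RE.sub("[REDACTED-EMAIL]", str(record.msg))
        return True  # keep the record, just scrubbed

logger = logging.getLogger("support")
logger.addFilter(RedactPII())
logging.basicConfig(level=logging.INFO)

logger.info("Ticket raised by jane.doe@example.com for order 1234")
# Logged as: Ticket raised by [REDACTED-EMAIL] for order 1234
```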

Stakeholder alignment remains a cultural and organizational hurdle, especially in a distributed workforce. Remote-first operations mean stakeholders are globally dispersed, each with different priorities. Aligning them around automation, resilience, and user experience requires stronger governance frameworks and communication rituals. Automated reporting dashboards and shared objectives can help keep technical and business teams moving in the same direction.

Resilience against emerging threats is another area that demands attention. Disaster recovery protocols must evolve to handle ransomware, supply chain attacks, and large-scale cloud outages. Multi-region failover, immutable backups, and chaos engineering practices will be essential to continuously test and strengthen resilience under real-world failure scenarios.

Even with automation, employee morale can suffer if systems are opaque or overly rigid. Investing in developer experience through intuitive dashboards, self-service tooling, and error-tolerant workflows will be critical to ensure that employees feel empowered rather than burdened by the technology they use every day.

Finally, sustainability of processes is a long-term challenge. Automation scripts, ticketing integrations, and bootstrap mechanisms risk becoming legacy systems themselves if not actively maintained. Treating automation as a living product, complete with versioning, documentation, and continuous improvement cycles, will prevent technical debt and ensure these systems remain effective as the organization grows.

Taken together, these challenges highlight that the future is not just about technical improvements but also cultural and systemic evolution. The next frontier is ensuring that automation scales gracefully, remains secure, and continues to serve both employees and customers in a way that is transparent, intuitive, and resilient against the unknown.

Tags: python, api, yaml, rest api, mql5, deployment, trackability

By attending - and/or in any way contributing to - any of our blogs, events or podcasts, you agree to adhere to our Community Guidelines and acknowledge that we reserve the right to remove any comment we deem offensive, unlawful or otherwise contrary to our Community Guidelines and our Terms of Use.