5.3: Feature and Functionality Migration/Replication

TL;DR

This phase transitions legacy eMap services, applications and workflows to the new eMap platform, prioritising re-architecture over a simple lift-and-shift. Components are redesigned to leverage Azure PaaS capabilities, cloud-optimised formats and automated deployment while adhering to data governance and Zero Trust security principles. Services are republished from authoritative data stores, applications are updated or rebuilt with modern frameworks, and workflows are adapted using Azure-native tools. Rigorous testing, including side-by-side comparisons, performance checks and security validation, ensures functionality and compliance before stakeholder sign-off enables legacy system decommissioning.

The next step following migration and validation of data is the migration or replication of features and functionalities. This encompasses the transition of existing services, applications and user workflows from the legacy eMap system to the new eMap platform. This process is not merely a "lift-and-shift" operation; it presents an opportunity to modernise, optimise and re-align these functionalities with the capabilities and architectural principles of the new cloud-native environment.

5.3.1 Re-development and Configuration Strategy

The strategy for re-developing and configuring legacy functionalities on the new eMap platform is guided by the core principles of modernisation, leveraging Azure PaaS capabilities and adhering to established data governance frameworks. The objective is to ensure that all functionalities are robust, scalable, secure and aligned with the new architectural patterns.

Guiding Principles

The re-development or re-configuration of any service, application or workflow must adhere to the following guiding principles:

  • Leverage Modern Architecture: All re-developed components should integrate with and leverage the new platform's architecture, including Azure PaaS services (Azure Database for PostgreSQL for enterprise data, Azure Blob Storage for caches and content, Azure Data Lake Storage Gen2 for rasters), automation via OpenTofu and Configuration Management, and the use of cloud-optimised data formats.
  • Prioritise Re-architecture over Simple Lift-and-Shift: Where beneficial, legacy components should be re-architected rather than simply migrated as-is. For example, an outdated, complex script might be re-implemented as a more efficient VertiGIS Studio Workflow, an Azure Function, or a series of orchestrated geoprocessing services. This ensures long-term maintainability and optimal use of cloud services.
  • Adherence to Data Governance: All re-published services and re-developed applications should adhere to the data governance policies outlined in Chapter 3. This includes sourcing data from the correct, designated data stores.
  • Security by Design: Security considerations, aligned with the Zero Trust model, must be integral to the re-development process. This includes secure authentication to services, appropriate authorisation controls and interaction with the WAF and ADC as necessary.

Services Migration and Re-publishing

The migration of GIS services involves a thorough review of the legacy service inventory and a systematic re-publishing process.

  • Source Data Alignment: All re-published services must source their data from the new, authoritative data stores.
    • Services based on vector enterprise data should be published from datasets migrated to Azure Database for PostgreSQL. This approach centralises data control, improves data integrity and leverages the performance and scalability of the PaaS database.
    • Raster services should be published from cloud-optimised formats (CRF, MRF) stored in Azure Data Lake Storage Gen2. Using these formats enhances performance and manageability for large raster collections.
  • Geoprocessing Services: Existing geoprocessing services must be evaluated. Simpler services might be re-published, ensuring their scripts are updated to interact with the new data sources and Azure environment. More complex or less efficient geoprocessing services should be considered for re-implementation using modern alternatives, such as Python scripts executable by Azure Functions, scheduled jobs, or as components within VertiGIS Studio Workflows.
  • Map and Feature Service Caching: For cached map and feature services, new caches must be generated in the CompactV2 format, suitable for cloud storage. These caches will reside in Azure Blob Storage. The cache generation process itself should be automated, ideally integrated into CI/CD pipelines. Manual cache creation is discouraged.

Applications Migration and Re-development

Custom applications that consumed legacy eMap services will require assessment and potential modification or re-development.

  • Existing Web Applications:
    • Applications planned for retention must be updated to consume services from the new eMap platform's endpoints. This includes updating service URLs and ensuring compatibility with any changes in service structure or authentication mechanisms.
    • Security configurations must be reviewed to ensure these applications correctly interact with the WAF and ADC.
  • Desktop Tools and Scripts (e.g., ArcPy-based):
    • Connection files (e.g., .sde files for ArcGIS Pro) must be updated or re-created to connect to Azure Database for PostgreSQL.
    • Scripts interacting with data or services must be revised to use new data paths, service endpoints and potentially updated Python libraries for interacting with Azure services (e.g., Azure SDK for Python if directly accessing Blob Storage or ADLS Gen2, though service interaction is preferred).
  • Re-development Opportunities: The migration provides an opportunity to re-develop legacy applications using modern frameworks and technologies, potentially leveraging serverless architectures (e.g., Azure Functions for backend APIs) or contemporary web development stacks that integrate seamlessly with the new ArcGIS Enterprise services.
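Updating retained applications and scripts to point at the new platform largely comes down to rewriting service endpoints. A minimal sketch of that step is shown below; the legacy and new URLs are hypothetical placeholders, and a real migration would drive the mapping from the service inventory rather than a hard-coded table.

```python
# Hypothetical mapping from legacy eMap endpoints to the new platform's
# endpoints; real values would come from the migration service inventory.
ENDPOINT_MAP = {
    "https://legacy-emap.example.org/arcgis/rest/services":
        "https://emap.example.org/server/rest/services",
}

def rewrite_endpoints(text: str, endpoint_map: dict = ENDPOINT_MAP) -> str:
    """Replace legacy service URLs in a script or config file with the
    corresponding new eMap platform endpoints."""
    for old, new in endpoint_map.items():
        text = text.replace(old, new)
    return text
```

Running such a rewrite across application configs and ArcPy scripts, then diffing the results, gives an auditable record of every endpoint change made during the migration.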

Workflows (ETL, User Workflows)

Existing ETL processes and user-facing workflows require careful review and adaptation.

  • Legacy ETL Processes: ETL jobs that populated the legacy eMap system must be re-evaluated. Many can be re-implemented using Azure-native data integration services, or as Python scripts using libraries such as GeoPandas and SQLAlchemy to interact directly with Azure Database for PostgreSQL and other Azure storage services.
  • User Workflows (VertiGIS Studio Workflows):
    • Existing Geocortex Workflows should be reviewed and re-developed to align with the new eMap platform's architecture and service endpoints.
    • New VertiGIS Studio Workflows should be developed to replace legacy manual processes or outdated workflows.
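The re-implemented ETL jobs described above follow a common extract-transform-load shape. The sketch below illustrates that shape using only the standard library, with `sqlite3` standing in for Azure Database for PostgreSQL; a production job would use SQLAlchemy (and GeoPandas for geometries) against the PaaS database, and the table and column names here are illustrative.

```python
import csv
import io
import sqlite3

def run_etl(source_csv: str, conn: sqlite3.Connection) -> int:
    """Extract rows from a legacy CSV export, normalise them, and load them
    into the target table. sqlite3 is a stand-in for Azure Database for
    PostgreSQL; the three-stage structure is what carries over."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS assets (asset_id TEXT PRIMARY KEY, name TEXT)"
    )
    rows = [
        (r["asset_id"].strip(), r["name"].strip().title())   # transform
        for r in csv.DictReader(io.StringIO(source_csv))      # extract
    ]
    conn.executemany("INSERT OR REPLACE INTO assets VALUES (?, ?)", rows)  # load
    conn.commit()
    return len(rows)
```

Structuring each job this way keeps the transform logic testable in isolation, independent of the source and target systems.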

Automation of Re-configuration and Deployment

To ensure consistency and repeatability, the configuration of re-published services (e.g., service definition files, publishing parameters) and the deployment of re-developed applications and workflows should be automated as much as possible. This includes:

  • Managing service definitions as code.
  • Using CI/CD pipelines to publish services, deploy applications and update VertiGIS Studio Workflow items.
  • Externalising environment-specific configurations for applications and workflows, rather than hardcoding them.
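The externalised-configuration point can be sketched as follows: settings are resolved from environment variables set by the pipeline, with defaults for local development. The setting names and values below are illustrative, not the platform's actual keys.

```python
import os

# Hypothetical settings; the keys and defaults are illustrative only.
DEFAULTS = {
    "EMAP_PORTAL_URL": "https://emap.example.org/portal",
    "EMAP_DB_HOST": "emap-pg.postgres.database.azure.com",
}

def load_settings(environ=os.environ) -> dict:
    """Resolve settings from environment variables, falling back to defaults,
    so the same artefact deploys unchanged across dev/test/prod with only
    the injected configuration differing."""
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}
```

The CI/CD pipeline would inject the per-environment values (ideally sourced from a secrets store such as Azure Key Vault) at deployment time, so no environment-specific value is ever baked into the application or workflow itself.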

Side-by-Side Testing

Where feasible and appropriate, conduct side-by-side testing by running the legacy functionality and the newly re-developed/re-configured functionality in parallel for a defined period.

  • Methodology: This allows for direct comparison of outputs, behaviour and performance characteristics. Key test cases should be designed based on common user scenarios and critical business operations.
  • Tools: This may involve a combination of automated comparison scripts (e.g., for data outputs or API responses) and manual validation by subject matter experts (SMEs) and Data Stewards.
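An automated comparison script of the kind mentioned above might reduce to comparing query results from the legacy and new services keyed on an ID field. The sketch below assumes results have already been fetched as lists of attribute dictionaries; the `objectid` key is an assumption about the data.

```python
def compare_feature_sets(legacy: list, new: list, key: str = "objectid") -> dict:
    """Compare two service query results keyed on an ID field and report
    records missing from either side plus attribute mismatches, for review
    by SMEs and Data Stewards during side-by-side testing."""
    legacy_by_id = {f[key]: f for f in legacy}
    new_by_id = {f[key]: f for f in new}
    shared = legacy_by_id.keys() & new_by_id.keys()
    return {
        "missing_in_new": sorted(legacy_by_id.keys() - new_by_id.keys()),
        "missing_in_legacy": sorted(new_by_id.keys() - legacy_by_id.keys()),
        "mismatched": sorted(k for k in shared if legacy_by_id[k] != new_by_id[k]),
    }
```

An empty report on every key test case is then a concrete, repeatable piece of evidence for the sign-off step, alongside the manual SME validation.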

Performance Testing

While broader platform performance testing has been covered, specific performance tests should be conducted for critical re-developed services and applications to ensure they meet non-functional requirements (NFRs) related to response times, throughput and concurrency under expected load.
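A targeted latency check against an NFR can be as simple as timing repeated calls and summarising the distribution. The sketch below uses only the standard library; `call` stands in for whatever invokes the service under test (e.g., an HTTP request to a re-published endpoint), and the percentile thresholds compared against would come from the NFRs.

```python
import statistics
import time

def measure_latency(call, runs: int = 50) -> dict:
    """Time repeated calls to a service function and summarise latency in
    milliseconds, so results can be checked against NFR thresholds such as
    a p95 response-time limit."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "max_ms": samples[-1],
    }
```

For throughput and concurrency under load, a dedicated load-testing tool is more appropriate; this sketch covers only single-caller response times.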

Security Testing

Re-developed applications and re-published services must undergo security validation to ensure they integrate correctly with the platform's security infrastructure (WAF, ADC) and adhere to Zero Trust principles. This includes verifying authentication, authorisation and secure data handling.
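Part of this validation can be automated as simple pass/fail checks on service responses. The sketch below evaluates a response captured from an unauthenticated probe; in practice the status code and headers would come from a real HTTP request through the WAF/ADC, and the specific checks shown are illustrative, not an exhaustive security test.

```python
def check_security_baseline(status_unauthenticated: int, headers: dict) -> list:
    """Return a list of failed baseline checks for a service response:
    anonymous requests should be rejected (Zero Trust) and transport
    security should be enforced. Inputs come from a real HTTP probe."""
    failures = []
    if status_unauthenticated not in (401, 403):
        failures.append("anonymous request was not rejected")
    if "Strict-Transport-Security" not in headers:
        failures.append("missing Strict-Transport-Security header")
    return failures
```

Such checks complement, rather than replace, formal penetration testing and the platform-level WAF/ADC validation.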

Sign-off

Sign-off from business owners and Data Owners/Stewards is required upon successful completion of UAT for each component or batch of functionality. This sign-off confirms that the migrated/re-developed features are accepted and ready for production use, paving the way for the decommissioning of the corresponding legacy components.