Discovery application and infrastructure portfolio


The setup of project controls is followed by a discovery of the application and infrastructure portfolio, in which the defined boundaries are detailed and criticality is identified. Conduct the discovery in a way that ensures optimal knowledge transfer and minimal disruption to the business organization.

At the core of the discovery and planning activities, the scope of the application and infrastructure portfolio targeted for migration needs to be clear. Whether it is a small subset of a portfolio or multiple datacenters, the project must have a specific scope before moving forward. Gauge how much data exists about the IT estate and how accurate and relevant that data is; often the data is not useful or has a low level of fidelity. Also probe which business drivers may affect the prioritization of the applications and infrastructure to be migrated. The driver could be de-risking assets (end of life, end of support, unlicensed assets), purely financial (the need to be out of the data center or colocation by a specific date), or an active roadmap for the applications where time to market is too slow on-premises. Other points to focus on are technical constraints (unsupported operating systems, mainframes, etc.). Work out the proper scoring weights to prioritize applications, then create the prioritized backlog of applications based on the discovered data and the agreed-upon scoring mechanism.
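The weighted scoring and backlog step above can be sketched in a few lines. The criteria, weights, and sample applications below are illustrative assumptions, not a prescribed scoring model; the point is only that agreed weights plus discovered data yield a sorted backlog.

```python
# Sketch: prioritize a migration backlog with agreed scoring weights.
# Criterion names, weights, and scores are illustrative assumptions.

# Agreed weights per business driver (sum to 1.0 for readability).
WEIGHTS = {
    "end_of_support_risk": 0.40,  # de-risking driver
    "financial_pressure": 0.35,   # e.g. data-center exit deadline
    "roadmap_urgency": 0.25,      # active roadmap, slow time to market
}

def score(app: dict) -> float:
    """Weighted sum of 0-10 criterion scores."""
    return sum(WEIGHTS[c] * app[c] for c in WEIGHTS)

applications = [
    {"name": "billing", "end_of_support_risk": 9, "financial_pressure": 3, "roadmap_urgency": 2},
    {"name": "intranet", "end_of_support_risk": 2, "financial_pressure": 8, "roadmap_urgency": 5},
    {"name": "webshop", "end_of_support_risk": 4, "financial_pressure": 6, "roadmap_urgency": 9},
]

# Highest score first = the prioritized migration backlog.
backlog = sorted(applications, key=score, reverse=True)
for app in backlog:
    print(f"{app['name']}: {score(app):.2f}")
```

In practice the weights come out of the business-driver discussion with stakeholders, and the criterion scores come from the discovered data.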

Assessment for a cloud migration entails study and analysis of several tracks, namely infrastructure (server, storage, and networks), middleware and databases, and applications.

Most organizations have disjointed and inaccurate views of their application portfolios. Whether it is a manually updated configuration management database (CMDB), spreadsheets, incomplete asset inventories, or purely tribal knowledge, their understanding of the IT estate is lackluster. Using the cloud migration method, populate the cloud data repository with all available infrastructure, applications, and application relationships, an analysis of the shared services, and an analysis of the disaster recovery (DR) capabilities.

Discovery of servers and virtual machines (VMs) is a straightforward process. It relies on interacting directly with the endpoint using an agent or the managing hypervisor. The goal of discovery is to collect application and infrastructure information, including type, configuration, usage, and running applications.
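As a minimal sketch of what an agent collects from an endpoint, the standard library alone can report basic server facts. Real discovery tools gather far more (usage history, running applications, patch levels); the field names here are assumptions for illustration.

```python
# Sketch: agent-style collection of basic server facts from the local
# endpoint using only the Python standard library.
import os
import platform
import socket

def collect_server_facts() -> dict:
    return {
        "hostname": socket.gethostname(),
        "os_vendor": platform.system(),     # e.g. "Linux", "Windows"
        "os_version": platform.release(),
        "architecture": platform.machine(), # some clouds do not offer 32-bit
        "cpu_count": os.cpu_count(),
    }

facts = collect_server_facts()
print(facts)
```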

It is also important to know on which operating system (OS) the applications will be deployed. Some applications only run on a specific OS, some cloud providers do not offer a 32-bit OS, and others may have unexpected subscription requirements. It is best to do this research in advance.

Analyze the impact of the cloud migration on dependencies such as payment gateways, SMTP servers, web services, external storage, and third-party vendors. Take into account that identifying all of the integration points is a tedious task.

When the discovery is completed, map the dependencies between the applications and the infrastructure. For the migration of an application, it is critical to have a clear view and understanding of the infrastructure and processes the application relies on.
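A dependency map like this is, at its core, a graph. The sketch below, with made-up component names, shows how a simple adjacency map plus a transitive walk reveals everything a single application's migration touches, including indirect dependencies.

```python
# Sketch: application-to-infrastructure dependencies as an adjacency map,
# with a breadth-first walk to find all transitive dependencies.
# Component names are illustrative assumptions.
from collections import deque

dependencies = {
    "webshop": ["app-server-01", "payment-gateway"],
    "app-server-01": ["db-server-01", "smtp-relay"],
    "db-server-01": ["san-volume-7"],
}

def transitive_dependencies(app: str) -> set:
    """All direct and indirect dependencies of an application."""
    seen, queue = set(), deque([app])
    while queue:
        node = queue.popleft()
        for dep in dependencies.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(transitive_dependencies("webshop")))
```

Note how the shared SAN volume only shows up through the indirect walk; these are exactly the integration points that are easy to miss.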

Discovery ensures that each workload will function on the selected cloud platform. Through the collected analysis, discovery tools can provide metrics on the compatibility of a workload with the cloud. Perform a configuration analysis to understand which workloads can migrate with no modifications, which require basic modifications to comply, and which are not compatible in their current formation.
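The three-bucket configuration analysis can be sketched as a simple classification over discovered attributes. The rules below (mainframes incompatible, 32-bit or end-of-support OS needing modification) are example assumptions; real tools apply provider-specific compatibility matrices.

```python
# Sketch: bucket workloads by cloud compatibility based on discovered
# configuration. Rules and workload data are illustrative assumptions.

def classify(workload: dict) -> str:
    if workload["platform"] in {"mainframe", "as400"}:
        return "incompatible"        # cannot migrate in current formation
    if workload["os_bits"] == 32 or workload["os_end_of_support"]:
        return "needs_modification"  # basic changes required to comply
    return "ready"                   # migrate with no modifications

workloads = [
    {"name": "crm", "platform": "x86", "os_bits": 64, "os_end_of_support": False},
    {"name": "ledger", "platform": "mainframe", "os_bits": 64, "os_end_of_support": False},
    {"name": "archive", "platform": "x86", "os_bits": 32, "os_end_of_support": True},
]

for w in workloads:
    print(w["name"], "->", classify(w))
```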

Below is an example checklist of the level of detail the application and infrastructure discovery needs to provide during the impact assessment and analysis.

Infrastructure assessment and analysis

  • Obtain the server and storage hardware details.
  • Identify and run non-intrusive tools to capture inventory details of infrastructure footprint.
  • Analyze the utilization and performance parameters.
  • Analyze as-is storage and compute distribution.
  • Obtain the hardware inventory of the IT room / datacenter, network and security.
  • Analyze current state of racks, power, cooling.
  • Analyze network connectivity and bandwidth requirements.
  • Assess the security standards and policies.
  • Identify cloud consolidation options for network and 3rd party connectivity.
  • Explore the cloud consolidation and optimization options.

Middleware and database assessment and analysis

  • Identify deployed middleware solutions through tools and questionnaires with stakeholders.
  • Analyze purpose and alignment with enterprise architecture.
  • Review enterprise integration strategy and roadmap.
  • Understand integration patterns.
  • Identify application dependencies to middleware and database.
  • Understand database instances deployed.
  • Validate the database-to-application mapping through tools and questionnaires.
  • Understand middleware and database licensing and rationalization options.
  • Identify any inflight projects.
  • Determine the list of middleware component upgrades planned.
  • Evaluate options for middleware platform cloud consolidation.
  • Determine the list of middleware tools to be in the cloud future state as per enterprise integration strategy.

Application assessment and analysis

  • Identify key stakeholders and application business owners.
  • Validate application dependency mapping to servers based on the tools executed to discover applications.
  • Analyze application workloads and non-functional requirements.
  • Analyze in-flight initiatives.
  • Derive application rationalization options.
  • Map commercial off-the-shelf (COTS) and custom applications, aligning to enterprise architecture.
  • Understand application batch jobs, online applications, and third-party interfaces to applications.
  • Map the production and non-production footprint.
  • Identify any applications that need upgrades or patching for cloud readiness.
  • Identify the applications that are possible candidates for cloud consolidation.

Capture application and middleware foundation details

  • Business owners
  • Business criticality
  • Characteristics (e.g. stateful or stateless)
  • Technology stack fundamentals
  • Infrastructure
  • Dependencies

Capture server foundation details

  • OS image
  • OS version
  • OS vendor
  • OS patch level
  • Type (e.g. physical or virtual)
  • VM size
  • VM version
  • VM vendor
  • VM patch level
  • CPU requirements
  • RAM requirements
  • Disk requirements
  • Dependencies

Capture storage and database foundation details

  • Data structures
  • Database requirements (e.g. MySQL or NoSQL)
  • Capacity requirements
  • Caching requirements
  • Redundancy
  • Data compliance (e.g. HIPAA)
  • Dependencies

Capture network foundation details

  • Connection type
  • Load and traffic requirements
  • Load balancer requirements
  • Security requirements
  • Dependencies
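One way to make the foundation details above concrete is to model them as typed records in the cloud data repository. The sketch below mirrors the server and application checklists with dataclasses; all field names are illustrative, not a prescribed schema.

```python
# Sketch: typed records for the cloud data repository, mirroring the
# "foundation details" checklists. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ServerRecord:
    hostname: str
    os_vendor: str
    os_version: str
    os_patch_level: str
    server_type: str              # "physical" or "virtual"
    cpu_cores: int
    ram_gb: int
    disk_gb: int
    dependencies: list = field(default_factory=list)

@dataclass
class ApplicationRecord:
    name: str
    business_owner: str
    business_criticality: str     # e.g. "high", "medium", "low"
    stateful: bool
    technology_stack: list = field(default_factory=list)
    servers: list = field(default_factory=list)  # hostnames it runs on

app = ApplicationRecord(
    name="webshop", business_owner="sales", business_criticality="high",
    stateful=True, technology_stack=["java", "postgresql"],
    servers=["app-server-01"],
)
print(app.name, app.servers)
```

Keeping the application-to-server link (`servers`) explicit in the record is what makes the dependency mapping queryable later.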

Tasks

  1. Select and implement cloud data repository tool.
  2. Select, implement, configure and run an automated discovery tool.
  3. Identify and prioritize the applications and infrastructure to migrate.
  4. Identify key requirements, stakeholders and players.
  5. Discuss with the application owner to understand the key application parameters (expected cloud maturity) at the portfolio or application level.
  6. Perform application to infrastructure dependency mapping based on category, business unit, criticality, availability, location, end-user impact and complexity.
  7. Analyze the critical dependency and integration issues.
  8. Analyze the security and compliance requirements at high level to understand the bottlenecks.
  9. Analyze the data sensitivity and security controls.
  10. Analyze the details of the exercises carried out for the cloud migration.
  11. Understand the operations management details.
  12. Analyze the managed production services details (tickets and non-ticketed services) to estimate the effort savings while moving to different service models in cloud.
  13. Identify the high-level cost and effort the migration may involve.
  14. Understand the infrastructure, application and service, and team dependencies.
  15. Establish appropriate import routines where required.
  16. Perform gap analysis between data sources and data model and produce plan to resolve.
  17. Create initial standard discovery reports.
  18. Create the baseline cloud data repository. This will become the single source of truth during the cloud migration!
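The baseline in the last task is what later deviation checks compare against. A minimal sketch, with made-up hosts and metrics, of snapshotting the repository and reporting what changed after a migration wave:

```python
# Sketch: snapshot the repository as a baseline and report deviations
# during migration. Hosts, metrics, and values are illustrative.
import copy

baseline = {
    "app-server-01": {"cpu_avg": 35, "ram_gb": 16, "status": "running"},
    "db-server-01": {"cpu_avg": 60, "ram_gb": 64, "status": "running"},
}

def deviations(current: dict, base: dict) -> list:
    """Fields that differ from the baseline - early indicators of problems."""
    diffs = []
    for host, metrics in current.items():
        for key, value in metrics.items():
            if base.get(host, {}).get(key) != value:
                diffs.append((host, key, base.get(host, {}).get(key), value))
    return diffs

# After a migration wave, compare the live state against the baseline.
current = copy.deepcopy(baseline)
current["db-server-01"]["cpu_avg"] = 95  # simulated post-migration spike

for host, key, before, after in deviations(current, baseline):
    print(f"{host}: {key} changed {before} -> {after}")
```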

Hints and tips

  • A fully functioning cloud data repository is more likely to be a relational database rather than a spreadsheet.
  • A relatively high degree of cloud data repository set-up and development will be required.
  • Create a cloud repository baseline.
  • During the migration, deviations from the established baseline can be early indicators of bugs or problems created during the migration.
  • After migration, the pre-migration baseline and the established goals can be used for judging the success of the migration and help determine when it can be considered complete.
  • If required, establish multiple baselines based on the application’s usage patterns. If it experiences usage peaks and valleys, establish baselines at multiple points and correlate to the specified usage patterns.
  • Compare and understand deviations from the baseline each step of the migration.
  • Keep the cloud budget in control and monitor the dynamic cloud environment post-migration.
  • Identify and engage the necessary SME skills.
  • The local IT team will be of great help. Engage their support but beware of overburdening.
  • The local IT team can provide the critical data from hardware and software resources and help format and present the critical data.
  • Remember to discover both source and target data elements.
  • The naming of applications is notoriously inconsistent. Consider using alias names if necessary.
  • Relating the applications to their underlying servers is the key to this activity.
  • Plan how to keep the cloud data current by continuously running the automated discovery tool.
  • It will be useful to do an initial application discovery during business case development to accurately reflect the scope.
  • It’s recommended to use an automated discovery tool.
  • Discovery tools may help but will take time to deploy; security sign-offs and time to interpret results. Do not rely 100% on these tools and be prepared to apply brain-power.
  • The various categories of shared services will lead to different mappings and relationships in the cloud data repository. The local IT team should be engaged to support this.
  • Source and target shared services may be different.
  • Decomposing discovery data into technology families and cloud migration strategies provides a high-level view of the project complexity:
    • Creates natural best practice work packages for project and resource planning.
    • Visual representation clearly communicates intended activities to business and technical stakeholders.
    • Facilitates risk identification toward risk management strategies.
  • What to look for in the portfolio discovery?
    • Profile discovery
    • Performance discovery
    • Tagging and grouping
    • Inventory export
    • Cloud VM instance recommendations
    • Dependency discovery
    • OS process discovery
    • Dependency visualization
    • API access
  • Portfolio data requirements examples:
    • Before starting the analysis ask the question what to discover.
    • Keep in mind the application connections, application and infrastructure dependencies, and the access patterns (internal/external).
    • Performance metrics drive right-sizing of resources.
    • Service naming and tagging helps identify patterns and group servers and applications.
    • Web-based applications (accessed via web browsers).
    • Applications that have no dependency (or are loosely coupled) on other on-premise applications.
    • Applications with no shared data storage (SAN/NAS) with other applications.
    • Applications with databases less than 1 Terabyte (TB).
    • Applications running on 10 – 15 VM instances.
    • Acceptable downtime (less than 4 hours).
  • The questions below will ease picking a portfolio discovery tool:
    • How to choose a discovery tool for the current environment?
    • How to deploy agents if it is an agent-based solution?
    • Will the security policies allow sharing administrative credentials with the tool if needed?
    • Can the discovered data be stored in a location outside of the organization?
    • Is it required to have application-to-port mapping details?
    • Do any of the applications use custom ports?
    • Are any custom applications running in the environment?
    • Are there any restrictions on the type of ports that can be used for scanning?
    • Is it required to have automated right sizing of the target environment?
    • Is it required to have estimated run costs of the target environment?
    • Is it required to have deep application performance monitoring?
    • Is it required to have deep infrastructure performance monitoring?
  • Run the automated discovery tools for about 4 weeks at the very minimum to gather sufficient data points.
  • Consider portfolio discovery tools for automating the discovery process.
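The hints about inconsistent application naming and alias names can be sketched as a small normalization step before loading discovered data into the repository. The alias table and names below are illustrative assumptions.

```python
# Sketch: normalize inconsistent application names with an alias table so
# discovered entries group under one canonical application name.
ALIASES = {
    "WebShop": "webshop",
    "web_shop_prod": "webshop",
    "CRM-System": "crm",
    "crm_v2": "crm",
}

def canonical(name: str) -> str:
    """Map a raw discovered name to its canonical application name."""
    return ALIASES.get(name, name.lower())

discovered = ["WebShop", "crm_v2", "web_shop_prod", "CRM-System", "billing"]

groups: dict = {}
for raw in discovered:
    groups.setdefault(canonical(raw), []).append(raw)

for app, raw_names in sorted(groups.items()):
    print(app, "<-", raw_names)
```

Grouping this way is what lets the repository relate applications to their underlying servers consistently, which the hints above call the key to this activity.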

Activity output
