|Automated business applications mapping and logical diagramming:|
|Software (applications) to server mapping.||Yes||Yes|
|Process-to-process or server-to-server connectivity diagramming.||Yes||Yes|
|Business application architecture and logical dependency diagramming with minimal or no IT staff interviews.||Yes||No|
|Quality application topology models (accurate enough for non-trivial automated topological analysis):|
|Middleware object-level connection discovery (e.g., which database inside an instance is used by an application).||Yes||Some|
|Ultra-deep software models (e.g., modeling job schedulers at the level of individual job dependencies).||Yes||No|
|Deep software models (e.g., modeling individual databases and their tablespaces).||Yes||Some|
|Shallow software models that identify software installations, their version, vendor, and location.||Yes||Some|
|Generic software models (to identify custom and rare software out of the box).||Yes||No|
|100% active software detection in a short time-frame (including custom software).||Yes||No|
|Classification of every dependency by type (e.g., IT infrastructure, NAS, etc.).||Yes||No|
|Automated application dependency analysis algorithms for:|
|Identification of unused IT assets.||Yes||No|
|Client data classification (for security and BI).||Yes||No|
|Reliability and availability analysis.||Yes||No|
|Enterprise software licensing analysis.||Yes||No|
|Services-friendly, fast, and complete deployment that relies on existing client tools and access policies.||Yes||No|
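One of the analysis items above, identification of unused IT assets, can be illustrated with a minimal sketch: once dependencies are collected, an asset with no inbound dependency is a candidate for decommissioning. The file names and the tiny inventory below are purely illustrative, not the product's actual data format.

```shell
# Illustrative data: a server inventory and an application-to-server
# dependency list (names are made up for the example).
printf 'srv1\nsrv2\nsrv3\n' | sort > /tmp/inventory.txt
printf 'app1 srv1\napp2 srv3\n' | awk '{print $2}' | sort -u > /tmp/in_use.txt

# Servers present in the inventory but never used as a dependency target,
# i.e., candidates for "unused IT asset".
comm -23 /tmp/inventory.txt /tmp/in_use.txt   # -> srv2
```

A real implementation would classify each dependency by type first, so that, for example, backup or monitoring traffic does not mark an asset as "in use".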
The data collection does not involve any memory-heavy operations, such as running a Java VM; only basic shell commands are used.
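As a rough sketch, a lightweight collector of this kind might run a handful of standard POSIX commands and capture their output; the exact command set and output path below are assumptions for illustration, not the vendor's actual collector.

```shell
# Hypothetical low-footprint collection pass: no JVM, no agents, just
# basic shell commands whose output is captured to a text snapshot.
{
  uname -a          # OS, kernel version, architecture
  df -P             # mounted filesystems and capacity
  env | head -5     # a few environment variables for context
} > /tmp/snapshot.txt

wc -l < /tmp/snapshot.txt   # the snapshot is plain text, a few lines long
```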
The collection process is started with low priority, so any production activity on the servers gets CPU time first and is not impacted. On an idle system the process may cause noticeable CPU overhead during the first seconds after start, but since the system is idle, this has no practical impact.
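On POSIX systems, starting a process at reduced priority is done with `nice`; lowering priority requires no special privileges. The commands wrapped below are illustrative placeholders for the collector, not its real command line.

```shell
# Launch the (illustrative) collection commands at the lowest scheduling
# priority: production processes are scheduled first, the collector yields.
nice -n 19 sh -c 'uname -a; df -P' > /tmp/collect.out

test -s /tmp/collect.out && echo "collected at low priority"
```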
We can typically use available data from existing tools and spreadsheets. However, the data modeling and collection quality of existing tools is usually insufficient to run automated graph-analysis algorithms and produce useful results; such data requires extensive manual labor and analysis to be useful. Therefore, we typically still need to run our own data collection.