System Requirements - Logi Info

This topic presents the general hardware and software requirements for using Logi Info and the products and data sources it supports.

General Requirements

The following general hardware and software components are supported by the latest releases of Logi Info. Note that Logi add-on modules and related products may not support all data sources and supporting software listed below; refer to their introductory documentation for details.

IMPORTANT: If you are using Info v12.7 SP1-9 and you wish to upgrade to JDK 8 release build 261 (1.8.0_261), or any later build of JDK 8, you will need to replace the mscorlib.jar file in the application folder with the new one and restart your server. The mscorlib.jar file can be found on our Product Download page. Please contact Support for additional assistance.

Browsers

- Chrome, Firefox: all current public versions (Certified)
- Microsoft Edge: all current public versions (Certified)
- Microsoft Internet Explorer: 11 (Certified); see note 3 below about earlier versions
- Safari, Opera: all current public versions (Supported)


Development, QA/Test, Production, and Disaster Recovery servers:
- Modern, high-speed CPU, with 4 cores, minimum
- 8 GB RAM, minimum
- 100 GB storage device
- 2 GB free disk space, minimum (with .NET or JDK already installed)

See the Sizing for Performance section below for more information.

Operating Systems

64-bit versions of the following are supported:
- Windows Server 2019 (v12.7+ only), 2016 (v12+ only), 2012 R2, 2008, 2003
- Windows 10 (Logi Info v12+ only)
- Windows 8 (all editions except RT)
- Windows 7 (all editions)
- Red Hat, Ubuntu, CentOS, SUSE, and most other flavors of Linux

Web Servers

- Internet Information Server (IIS) 6.x, 7.x, 8.x
- Internet Information Server (IIS) 10 (Logi Info v12+ only)
- Apache Tomcat 6, 7, 8 (without Tomcat FHS), 9
- JBoss 4, 5, 7, EAP-6.3
- GlassFish (SJSAS) 2.1, 3.0, 3.1, 4.0, 4.1
- WebLogic 10, 12c
- Websphere 7, 8.5
- WildFly 8, 9, 10

Supporting Software

Required for all development work, using Logi Studio:
- Microsoft .NET Framework 4.6+

Required for development and execution of Logi .NET applications on Windows servers:
- Microsoft .NET Framework 4.6+

Required for development and execution of Logi Java applications on Windows or Linux servers:
- OpenJDK 8, 11, 12, 13, 14 (see note 5 below regarding 11+)

Oracle has changed its Java usage policies - see Java Usage Policy for important information.

Data Sources

Logi Info can connect to, use data from, and in most cases write back to, the following data sources:

- DB2 database server
- Files: JSON, XML, Excel, CSV
- Google Docs, Google Maps
- HP Vertica
- JDBC-compliant database servers
- Microsoft SQL Server database server
- Microsoft SQL Server Analysis Services (OLAP)
- MongoDB (up to and including v2.6, but not later versions)
- MySQL database server (excluding v5.5)
- ODBC-compliant database servers
- OLEDB-compliant database servers
- Oracle database server
- PostgreSQL database server
- Progress OpenEdge database server
- Sybase database server
- Web Services (REST and SOAP)

With the addition of supporting products such as Logi DataHub, many other online commercial data sources can be accessed. See each product's System Requirements or introductory documents for more information.


Notes:

1. IIS must be installed before installing Logi products. Logi .NET applications on Windows and Logi Studio require the .NET Framework 4.6+. If not already in place, appropriate versions of the .NET Framework are installed, with your consent, when Logi products are installed. They are also available for free from the Microsoft Download Center.

2. More information for Linux developers about Logi Java applications is available in About Logi Apps and Java.

3. While Internet Explorer versions 7-10 are certified for use with Logi Info v12.x, Microsoft has ended support for them. Microsoft has also ended support for IE 5 and 6. You should use Microsoft Edge for viewing applications built with Logi Info v12.x and later.

4. Microsoft ended support for .NET Framework 4.5 in January 2016. .NET 4.6 is required for Logi .NET applications 12.6+.

5. Use of Java 11+, supported in Info v12.6 SP2 and later, requires special configuration - see Java Server Configurations for more information.

Adobe no longer supports the Flash Player, and has blocked Flash content from running in the Flash Player since January 12, 2021.


Sample Architecture

The architecture described below represents a deployment of Logi Info and the Logi Scheduler service to AWS. In this example, the Logi Info application and the parent application in which it's embedded are hosted in a Virtual Private Cloud behind a load balancer. EC2 instances hosting the Logi Info application can be spun up or down according to user demand.

This example also assumes that shared resources (such as Bookmark files and SecureKey references) are managed in a database accessible to each EC2 instance. An alternative approach is to configure a network share in place of the database.


Sizing for Performance

The following are general sizing considerations for a Logi Info web application, followed by high level recommendations for server components based on these considerations. Specific recommendations will vary depending on your implementation.

To accurately determine the appropriate deployment model for your Logi Info applications, take the following into consideration:

  • Maximum Number of Concurrent Users - As with any system, the application needs to respond to end users in a timely fashion, even during peak load times.
  • Number of Concurrent Data Visualizations - The number of visualizations that are loaded concurrently into a page or dashboard will affect performance.
  • Complexity of Data Visualizations - Some data visualizations present a single metric and dimension, while others present multiple metrics and/or dimensions. More resources are required to render complex data visualizations.
  • Volume of Data Used to Generate Data Visualizations - Logi Info data visualizations range from simple, summary charts to interactive, self-service analysis tools. The volume of data required to be delivered from the data tier to the application tier to generate these visualizations can range from a few dozen records to tens of thousands of records.
  • Logi Info-Based Data Aggregation or Manipulation - Whenever possible, data manipulations, such as calculations and aggregations, should be performed within the data server or system because they're optimized to perform these operations. In addition, the amount of data transferred over the network from the data system to the Logi application should be limited to the actual needs of the data visualization. When it is necessary to perform data manipulations in the Logi application, the type of manipulation and the volume of required data will have an impact on the resources available.


Based on load-testing of benchmark data from our Large Enterprise customers, we have established the following recommendations and guidelines for a typical scenario.

  • Minimum Configuration Cores: The minimum number of required CPU cores is determined by the expected number of end users, as follows:

End Users         Baseline # Cores
1 - 100
101 - 250
251 - 500
501 - 1000
> 1000            Custom Sizing Required

Using the minimum configuration as a starting point, the following factors will determine the processor requirements for your production environment:

  • High Concurrency - If you anticipate end-user concurrency of greater than 10%, increase the number of cores by 25% to 50% of the Minimum Configuration Cores.
  • Application Complexity - If you judge that your application is complex, based on the considerations presented earlier (e.g. high number of visualizations in a single page, complex workflows, and complex data processing at the app tier), we recommend that you assess the impact of that complexity on overall performance and sizing. It is not unusual to add an additional 25% to 100% of the Minimum Configuration Cores to ensure better performance.

  • High Availability - Load-balanced, high-availability systems will require an increased core count. It is typical for Large Enterprises to deploy an additional 50% to 100% of the Minimum Configuration Cores based on the target service level.


We recommend a 2-to-1 ratio of gigabytes of memory to CPU core (i.e. two GB of RAM per core).
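As an illustration, the memory ratio and the core uplifts above can be applied mechanically. This is only a sketch: the baseline of 8 cores and the 50% high-availability uplift are assumed example values, not fixed recommendations.

```shell
#!/bin/sh
# Sketch: derive RAM from a core count using the 2-to-1 GB-per-core ratio.
BASELINE_CORES=8                        # assumed baseline for illustration
HA_CORES=$((BASELINE_CORES / 2))        # +50% of baseline for high availability
TOTAL_CORES=$((BASELINE_CORES + HA_CORES))
RAM_GB=$((TOTAL_CORES * 2))             # 2 GB of RAM per core
echo "cores=$TOTAL_CORES ram_gb=$RAM_GB"
```

With these example inputs, the sketch arrives at 12 cores and 24 GB of RAM.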


Logi Info applications are not memory-constrained because they leverage a hybrid caching mechanism (see The Logi Server Engine) that utilizes both disk- and memory-based storage when performing data acquisition, data aggregation, visualization generation, and other data-intensive operations.

Using this mechanism, a Logi application dynamically allocates storage resources depending on the processing stage in order to optimize their use. Therefore, we recommend that the server utilize dedicated, "fast" storage devices, such as SSDs or high-RPM disk drives. Total storage usage is highly variable and depends on the size of the data volumes being processed and cached by the application. At a minimum, drives with a capacity of 100 GB are commonly used.


In addition to the primary production environment, other environments should be appropriately sized as well:

  • Development Environments - We recommend a four core minimum for any development environment. For environments with sophisticated development needs, such as a high number of developers, continuous integration development style, and test driven development, it's typical to size a development environment at 25% to 50% of the baseline number of cores, with a minimum of four cores.

  • Quality Assurance and Load Testing Environments - Load testing or automated QA practices typically require a mirror image of the production environment, with the same number of cores in the same machine configuration, in order to accurately replicate production performance characteristics. If load testing is not required, then 25% to 50% of the production core sizing, with a minimum of four cores, is suitable for more complex environments or automated QA.

  • Disaster Recovery Environment - Some large enterprises require a stand-by system for disaster recovery. In this scenario, the Disaster Recovery system is typically a mirror image of the full production environment.
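The development-environment rule above (25% to 50% of the production baseline, never below four cores) can be sketched as a small calculation. The production baseline value and the choice of the 25% low end are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: size a development environment at 25% of the production baseline,
# with a floor of four cores.
PROD_BASELINE_CORES=24                     # assumed production baseline
DEV_CORES=$((PROD_BASELINE_CORES * 25 / 100))
if [ "$DEV_CORES" -lt 4 ]; then
  DEV_CORES=4                              # enforce the four-core minimum
fi
echo "dev_cores=$DEV_CORES"
```

For a 24-core baseline this yields 6 development cores; any baseline of 16 cores or fewer falls back to the four-core minimum.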

How Many Servers Do You Need?

Your production web server hosts your Logi applications, which run as extensions to the web server. Ideally, this computer should be dedicated to this task alone.

However, other configurations are possible, including those in which the web server also serves other functions (i.e. is not dedicated to Logi applications alone) and/or is also the database server. Some Logi large enterprise customers use multiple "clustered" servers for top performance and high reliability.

Numerous factors external to your Logi application can affect this decision, including the amount of general web server traffic, the number of concurrent database users, the size of the databases, the complexity of the database queries, the frequency of access, and, not least, cost.

You may choose to begin with a combined configuration and, as your report usage grows, change to a dedicated configuration. The nature of Logi products allows you to do this easily and often without additional cost (if CPU core count remains constant).

Recent studies concerning server virtualization suggest that database servers are frequently under-utilized. On the other hand, many database vendors recommend that their products be run on a dedicated server. You may wish to check with your database vendor for their recommendations concerning database servers.


Scaling Logi Info Applications

Logi Info can be deployed to clustered environments on premise or in the cloud. This section outlines how to prepare a Logi Info deployment for scaling with user demand and greater data volumes over time.


While Info applications can adapt for vertical or horizontal scaling, Logi recommends a horizontal scaling approach. The following items are crucial for planning a Logi Info deployment for horizontal scaling:

  1. Sticky Session Configuration is Recommended – Some application features function best in a load-balanced environment in which user sessions are “sticky” to the server on which they originate.
  2. Some Application Resources are Shared – Saved dashboard configurations (known as Bookmarks) and other key items, such as reference files generated by Logi’s SecureKey SSO method, should be stored on a network share or in a database accessible to all servers in a cluster hosting the Logi Info application.
  3. For Autoscaling, Scheduling is Recommended - Common patterns include a usage surge on weekday mornings when users arrive at the office, and nightly scheduled report generation. Multi-threaded requests increase CPU usage, and can result in unnecessary VMs being provisioned when using an auto-scaling algorithm.
  4. Embedding – If using the Logi Embedded API, the parent app should provide the load-balanced URL to the client browser.
  5. Scheduling – Logi Scheduler is a standalone service that can be scaled independently. However, the typical recommendation is to install one Scheduler per production environment.
  6. Upgrading – When upgrading Logi Info, all load-balanced environments will need to be updated.
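For item 1, sticky sessions can be configured at the load balancer. A minimal Nginx sketch is shown below; the upstream name, node addresses, and port are assumptions, and the config is only written to a file here since validating it requires Nginx itself:

```shell
#!/bin/sh
# Sketch: an Nginx upstream using ip_hash so each client IP is routed to the
# same Logi Info node ("sticky" sessions). Addresses are placeholder values.
cat > logi-upstream.conf <<'EOF'
upstream logi_info {
    ip_hash;                  # route each client IP to the same node
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
EOF
grep -c 'server ' logi-upstream.conf
```

Cookie-based stickiness (e.g. via a commercial load balancer or Nginx Plus) achieves the same effect; ip_hash is simply the mechanism available in open-source Nginx.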

Configuring for Load Balancing

The following checklist can be used when deploying to a node in a load-balanced environment for the first time. This process can be automated as part of a larger DevOps process to increase productivity.

  • Prepare Server(s) – Install preferred IIS or Java web server container. Size servers for planned load and verify that each node is networked to other environment dependencies including data sources and file shares.
  • Install add-on services like Logi Scheduler – Where scheduling capabilities are required, the Scheduler service will need to be installed separately.
  • Deploy Logi Info application – Promote the application and relevant support files from a staging environment.
  • Configure Settings – Some Logi application settings are environment specific such as license file location, IP ranges for SSO configuration and potentially data source connections. Values for these settings can be dynamically set in an automated process.
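The "Configure Settings" step above can be automated with simple templating. The sketch below substitutes an environment-specific license path into a settings file at deploy time; the template name, placeholder token, and license path are all hypothetical, and the template itself is created inline only so the example is self-contained:

```shell
#!/bin/sh
# Sketch: inject environment-specific values into a settings template
# during an automated deployment.
LICENSE_PATH=/opt/logi/license.lic         # assumed per-environment value

# A minimal template is created here only to make the example runnable.
printf 'LicenseFile="@LICENSE_PATH@"\n' > _Settings.template
sed "s|@LICENSE_PATH@|$LICENSE_PATH|g" _Settings.template > _Settings.lgx
cat _Settings.lgx
```

The same pattern extends to SSO IP ranges and data source connection strings: keep one template per application and substitute the values for each environment in the deployment pipeline.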

For more information, see Load-Balancing Configuration.


Server Virtualization

Many organizations are using server virtualization to maximize hardware usage and reduce costs.

Server virtualization products allow the assignment of CPU resources to processes. This may take the form of a maximum percentage of combined CPU utilization, or as specific allocation of logical CPUs, to a virtual machine (VM). The server administrator is responsible for making these configuration decisions. Logi Analytics' product licenses treat a VM just like a regular, non-virtualized server.

In order to ensure good performance in any virtual server environment, administrators must be careful to allocate appropriate resources to VMs.

It's not uncommon to relocate a VM from one hardware platform to another, for example, for hardware maintenance. The Logi license will "move" with the VM, as long as the machine name remains unchanged.


Container Deployments

The Logi Analytics Platform and Logi applications can be bundled into and executed from container environments, such as Docker. Some of our customers have had success deploying their Logi applications in this manner.

Logi Professional Services staff may be able to assist you with such a deployment, for a fee. However, we don't recommend any particular container over another and we don't certify that Logi applications will work using a container.

Docker Instructions

A best practice for Docker is to separate the Tomcat container and Logi Scheduler into their own nodes behind a load balancer (for information about load balancing, see Load-Balancing Configuration).

  1. Copy the Logi Info app to the Linux host (follow the configuration steps in the Deployment Checklist below for details)

    1. Must have security configured with users, and rdSecurekey sharing configured for multiple nodes

    2. Must have data connections set up for the production configuration

    3. Must have rdError sharing set up for multiple nodes (complete with a custom error page)

    4. Must have an OEM license

    5. Must have Scheduler connections set up to work behind a load-balanced end point

  2. Install Docker

  3. Pull Tomcat image from Docker Hub

  4. Run Tomcat container and enter shell to install Scheduler

    1. docker run -it tomcat bash

    2. Install Scheduler

    3. From a second shell, commit the changes to the Tomcat Docker image as tomcat

  5. Create a Dockerfile to build an image that includes the Logi App and Scheduler content

    1. Transfer Info app to container

    2. Transfer script that starts both Scheduler and Tomcat

    3. Run start-up script(s)

    4. Expose the ports required by Tomcat and Scheduler

  6. Build a new image (you must do this every time you make a change to the Info app)

    1. docker build -t tomcat .

  7. Run Tomcat
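The steps above can be collected into one deployment script. The sketch below writes the script to a file and syntax-checks it rather than executing it, since running it requires a Docker daemon; the image and container names (logi-base, tomcat-logi, tomcat-logi-app) are assumptions, not names from the original instructions:

```shell
#!/bin/sh
# Sketch: the Docker build/run sequence from the steps above, as a script.
cat > build-logi-image.sh <<'EOF'
#!/bin/sh
docker pull tomcat                            # step 3: pull the Tomcat image
docker run -it --name logi-base tomcat bash   # step 4: open a shell, install Scheduler
docker commit logi-base tomcat-logi           # step 4.3: commit from a second shell
docker build -t tomcat-logi-app .             # step 6: build image with the Logi app
docker run -d -p 8080:8080 tomcat-logi-app    # step 7: run Tomcat
EOF
sh -n build-logi-image.sh && echo "syntax ok"
```

Because the image must be rebuilt on every change to the Info app (step 6), keeping this sequence in a script makes it easy to fold into a CI pipeline.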

At this point, you will have a container with your Logi app working on port 8080. The following steps show you how to use docker-compose to stand up multiple instances of Info and load-balance them with Nginx:

  1. Write a docker-compose.yml file

  2. Run docker-compose

    1. docker-compose up -d

  3. Scale Info up:

    1. docker-compose scale app=2

Useful Links

The following links are resources for implementing Docker:


- Multiple executables in container script
- Bind Mount

Dockerfile Example

Below is a Dockerfile that accompanies the directions above. This example is not meant to be run as-is; it is simply a guide for creating your own. The start-up script name, start.sh, is a placeholder:

FROM tomcat

MAINTAINER Author Name <>

# Copy the Logi Info application into Tomcat's webapps folder
ADD /InfoGo /usr/local/tomcat/webapps/InfoGo

# Copy the start-up script that starts both Scheduler and Tomcat
ADD start.sh /usr/local/tomcat/

CMD ["/usr/local/tomcat/start.sh"]

EXPOSE 56982


Docker-Compose Example

Below is an example of a docker-compose file. Apart from the two image names, the service names and settings shown here are illustrative assumptions:

app:
  image: tomcatdiscovery
  environment:
    - VIRTUAL_HOST=logi.example.com

lb:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

Cloud Deployments

Cloud-based hosting services, such as Amazon Web Services (AWS), can host Logi Info Platform components and Logi applications. Customers typically host the Info Platform on general-purpose VMs from cloud providers.

For a representation of a deployment of Logi Info to AWS, see the example in Sample Architecture.

If you want to host Logi Info applications, you should use at a minimum:

  • VPC with multiple Availability Zones (required for load balancing)
  • Two EC2 M5.xlarge instances (4-core, 16 GB RAM, 50 GB EBS internal storage)
  • EFS storage (5+ GB) for storing error logs
  • RDS (Microsoft SQL Server, MySQL, PostgreSQL, Oracle) with 10+ GB storage to share information between server instances and to persist user state, including
    • security handshake information
    • user bookmarks
    • user activity logs (using Event Logging, which developer defines in Info application)
  • Application Load Balancer (ELB) with Sticky session

If you'd like to try deploying your Logi Info application to AWS by yourself, this blog post from dbSeer may be useful. It includes a link at the bottom to step-by-step instructions. Note that we do not provide support related to this third-party blog post.

Otherwise, Logi Professional Services staff may be able to assist you with such a deployment. However, we don't recommend any particular service over another.

Choosing EC2 Over Other Services

Logi Info platform profile:

  1. Logi Info-based apps are built on a classic, monolithic architecture

  2. The application combines a multitude of features beyond HTML generation, e.g. PDF exporting, which rely on many additional services and libraries

  3. The application depends on very high file I/O

  4. The application depends on web server capabilities

  5. A sticky load-balancing strategy is recommended due to high file I/O use-cases

Virtual machines are a recommended approach since they support the flexibility required by the above profile.

AWS provides EC2 as a flexible VM solution, along with a command-line interface for creating and managing EC2-based infrastructure, which allows for integrating into existing scripting solutions, e.g. Ansible, Chef, etc.

AWS Lambda

AWS Lambda is a serverless environment, so there are restrictions on how an application can be managed or deployed. It may also impact the Logi Info platform's capabilities, e.g. PDF generation.

AWS Container Service

Docker containers are a great alternative to manually defining the server environment. More information on packaging the Logi Info platform in containers is covered in a separate document.

Benefits of using containers include:

  • Developers can control the application environment; DevOps have limited interaction

  • Containers can be deployed to other cloud providers with limited reconfiguration

Drawbacks with containers, compared to EC2 VMs:

  • For non-OEM licenses, provisioning licenses based on hostname of instances can be challenging

  • Containers can run only a single service. The Scheduler service (optional) will need to be deployed separately and networked with the Info platform, which will require additional configuration.

Deployment and Auto-Scaling

AWS Beanstalk

Beanstalk makes it easier to provision EC2 instances and deploy code, but there are some concerns about the lack of control over provisioned resources and over monitoring state.

AWS Cloudformation Template

Users access the Logi Info application through the customer's existing web application; the Logi Info application is embedded in the existing parent application and depends on the customer's existing infrastructure architecture. We do not have templates to support different scenarios. With the above recommendations, customers can update any existing templates to provision appropriate infrastructure to host the Logi Info application.

Auto-Scaling Strategy

The majority of our customers use a scheduled scaling strategy to support known usage patterns. Common patterns include a usage surge on weekday mornings when users arrive at the office, and nightly scheduled report generation.

The Logi Info application has a built-in capability to multi-thread individual user requests for dashboards and other analytical features. This optimizes resource usage and speeds up delivery, but multi-threaded requests spike CPU usage. Most auto-scaling solutions monitor CPU usage to provision and scale up additional VMs, so an auto-scaling algorithm can clash with this usage and unnecessarily provision VMs - hence the scheduled scaling approach.
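On AWS, a scheduled scaling strategy like the one described above can be expressed as a scheduled action on an Auto Scaling group. The group name, capacities, and cron schedule below are assumptions; the command is written to a file and syntax-checked only, since actually running it requires AWS credentials and an existing group:

```shell
#!/bin/sh
# Sketch: a scheduled scale-up for the weekday-morning usage surge.
cat > schedule-scaling.sh <<'EOF'
#!/bin/sh
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name logi-info-asg \
  --scheduled-action-name weekday-morning-surge \
  --recurrence "0 7 * * MON-FRI" \
  --min-size 2 --max-size 6 --desired-capacity 4
EOF
sh -n schedule-scaling.sh && echo "syntax ok"
```

A matching evening action with a lower desired capacity would scale the group back down after the surge.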
