Amazon OpenSearch Service
An Elasticsearch-compatible managed search and analytics engine that supports diverse use cases including full-text search, log analytics, and real-time dashboards.
Overview
Amazon OpenSearch Service is a fully managed search and analytics service built on OpenSearch (an open-source fork of Elasticsearch). AWS handles cluster provisioning, patching, backups, and monitoring, significantly reducing operational overhead. Beyond serving as a full-text search engine, it is used for log visualization and analysis through OpenSearch Dashboards and for real-time monitoring via anomaly detection plugins. The UltraWarm and Cold storage tiers enable cost-effective long-term retention of large volumes of log data.
Index Design Determines Query Performance
OpenSearch performance is largely determined by index design. Because the number of primary shards cannot be changed after index creation (short of reindexing), the initial design is critically important. The recommended size per shard is 10-50 GB; exceeding this degrades query latency and extends recovery times. For example, when ingesting 100 GB of logs daily, a common approach is to create date-based indices (named like logs-2026-04-19) and assign 3 primary shards to each index.

Mapping definitions should explicitly specify field types. Relying on dynamic mapping can result in numeric fields being recognized as text, making numeric aggregations on them fail. The distinction between keyword and text types is also important: use keyword for exact-match filtering and text for full-text search. In practice, the standard approach is to define index templates that automatically apply mappings and shard settings to new indices, as in the sketch below. By configuring Index State Management (ISM) policies, you can automate lifecycle management, transitioning indices to UltraWarm after a certain period and deleting older ones.
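A minimal sketch of such a template using the opensearch-py client. The endpoint, credentials, template name, and field names here are illustrative placeholders, not values from any real domain:

```python
from opensearchpy import OpenSearch

# Placeholder endpoint and credentials -- replace with your domain's values.
client = OpenSearch(
    hosts=[{"host": "search-mydomain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),
    use_ssl=True,
)

# Composable index template: every new index matching logs-* gets
# 3 primary shards and explicit field mappings, so dynamic mapping
# never guesses a numeric field as text.
template_body = {
    "index_patterns": ["logs-*"],
    "template": {
        "settings": {
            "index.number_of_shards": 3,    # fixed at creation; size for 10-50 GB per shard
            "index.number_of_replicas": 1,
        },
        "mappings": {
            "properties": {
                "@timestamp":  {"type": "date"},
                "status_code": {"type": "integer"},  # numeric, so range/avg aggregations work
                "service":     {"type": "keyword"},  # exact-match filtering
                "message":     {"type": "text"},     # analyzed for full-text search
            }
        },
    },
}

client.indices.put_index_template(name="logs-template", body=template_body)
```

With this in place, a daily index like logs-2026-04-19 picks up the shard count and mappings automatically at creation time, with no per-index setup.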
UltraWarm and Cold - Tiered Strategy for Cost Optimization
In log analytics workloads, recent data is queried frequently, but access frequency drops sharply for data that is several months old. OpenSearch Service provides three storage tiers aligned with this characteristic. Hot nodes use SSD-based instance storage and handle real-time writes and fast queries. UltraWarm is warm storage backed by S3, reducing storage costs by approximately 90% compared to Hot nodes. It is read-only but queryable, suited for use cases that can tolerate latency of a few seconds to tens of seconds. Cold storage is even more cost-effective; indices must be attached back to UltraWarm before they can be queried, making it ideal for logs that must be retained long-term for compliance requirements. Azure Cognitive Search lacks this kind of tiered storage mechanism and keeps all data in a single storage tier, giving OpenSearch Service a cost-efficiency advantage for long-term storage of large log volumes.
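As a sketch of the ISM automation mentioned earlier, the following policy tiers indices to UltraWarm after 7 days and deletes them after 90. The policy name, retention periods, and index pattern are assumptions for illustration, and the client setup mirrors the template sketch above:

```python
from opensearchpy import OpenSearch

# Same placeholder endpoint/credentials as in the template sketch.
client = OpenSearch(
    hosts=[{"host": "search-mydomain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),
    use_ssl=True,
)

# Lifecycle: hot (writes, fast queries) -> UltraWarm at 7 days -> delete at 90 days.
policy = {
    "policy": {
        "description": "Tier logs-* indices to UltraWarm, then delete",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {"state_name": "warm", "conditions": {"min_index_age": "7d"}}
                ],
            },
            {
                "name": "warm",
                # warm_migration is the managed-cluster ISM action that moves
                # an index to S3-backed, read-only UltraWarm storage.
                "actions": [{"warm_migration": {}}],
                "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "90d"}}
                ],
            },
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Automatically attach this policy to newly created matching indices.
        "ism_template": {"index_patterns": ["logs-*"], "priority": 100},
    }
}

# opensearch-py has no dedicated ISM helper here, so call the REST endpoint directly.
client.transport.perform_request(
    "PUT", "/_plugins/_ism/policies/logs-lifecycle", body=policy
)
```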
Choosing Between Managed Clusters and Serverless
In addition to traditional managed clusters, OpenSearch Service introduced OpenSearch Serverless in 2022. Serverless eliminates the need for cluster sizing and shard management, with OCU (OpenSearch Compute Unit) auto-scaling that adjusts resources based on traffic. While operational overhead is dramatically reduced, there are tradeoffs: Serverless does not support custom plugins, and index lifecycle management through ISM policies is not available. Additionally, a minimum of 4 OCUs (2 for indexing + 2 for search) is always billed, so for small-scale workloads, managed clusters may be more cost-effective.

As a decision guideline, Serverless is suitable when the number of indices is small and the operations team has limited resources, while managed clusters are better for large-scale workloads requiring fine-grained control over shard design and plugins. When choosing managed clusters, the recommended production configuration is a multi-AZ deployment with three dedicated master nodes, as in the sketch below. Dedicated master nodes focus solely on cluster state management, preventing data-node load from affecting cluster stability.
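A minimal provisioning sketch of that configuration with boto3. The domain name, engine version, instance types, and volume size are placeholder assumptions, not sizing recommendations, and access policies and network settings are omitted for brevity:

```python
import boto3

client = boto3.client("opensearch", region_name="us-east-1")

client.create_domain(
    DomainName="logs-prod",
    EngineVersion="OpenSearch_2.11",
    ClusterConfig={
        # Data nodes spread evenly across 3 Availability Zones.
        "InstanceType": "r6g.large.search",
        "InstanceCount": 6,
        "ZoneAwarenessEnabled": True,
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 3},
        # Three dedicated master nodes keep cluster-state management
        # isolated from data-node query and indexing load.
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "m6g.large.search",
        "DedicatedMasterCount": 3,
    },
    EBSOptions={
        "EBSEnabled": True,
        "VolumeType": "gp3",
        "VolumeSize": 512,  # GiB per data node
    },
)
```

Three dedicated masters (always an odd number) allow a quorum to survive the loss of one node, which is why this count is the standard production choice.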