Aurora PostgreSQL Internal Tables vs S3 External Data: A Real-World Performance & Cost Benchmark

Aurora PostgreSQL supports reading data directly from Amazon S3 using aws_s3 functions, making it possible to offload heavy or cold data without fully moving it to a data lake platform. But does it perform well enough for analytical queries? In this benchmark, I tested identical datasets—one stored locally in Aurora tables, and the other offloaded to S3 and queried through external table definitions—to compare latency, resource impact, and practical usability.

Aurora PostgreSQL doesn’t provide a traditional federated query capability for S3 CSV files like some other database systems. Instead, you can:

  • Use External Tables via aws_s3 Functions
  • Create a View that dynamically loads S3 data
  • Use a Stored Procedure for ad-hoc S3 queries
  • And the list goes on…

I chose the first option because it offers several advantages:

Performance Control – You can optimize the function for your specific use case
Reusability – Create once, use multiple times with different parameters
Memory Management – Uses temporary tables that are automatically cleaned up
Flexibility – Easy to modify for different S3 files or filtering
Error Handling – Ability to implement proper error handling and logging
Cost Efficiency – Data is loaded only when actually needed
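The list above can be sketched as a single PL/pgSQL function. This is a minimal illustration, not the exact function from the benchmark: the table name, column layout, bucket, and region are placeholders you would adapt to your own data.

```sql
-- Sketch of the external-table function approach (names, columns and the
-- bucket/file/region are placeholders - adjust for your environment).
CREATE OR REPLACE FUNCTION read_s3_orders(p_file text)
RETURNS TABLE (order_id int, amount numeric, created_at date)
LANGUAGE plpgsql AS
$$
BEGIN
    -- Temp table lives only for the transaction and is cleaned up automatically
    CREATE TEMP TABLE IF NOT EXISTS s3_orders_tmp
        (order_id int, amount numeric, created_at date)
        ON COMMIT DROP;

    -- Load the CSV from S3 only when the function is actually called
    PERFORM aws_s3.table_import_from_s3(
        's3_orders_tmp',
        '',                                   -- empty = all columns
        '(format csv, header true)',
        aws_commons.create_s3_uri('my-bucket', p_file, 'eu-central-1')
    );

    RETURN QUERY SELECT * FROM s3_orders_tmp;
END;
$$;

-- Usage: SELECT * FROM read_s3_orders('orders/2024.csv') WHERE amount > 100;
```

Because the parameter selects the S3 object, the same function is reusable across files, and the temporary table keeps the loaded data out of permanent storage.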

Setup PMM Server on EC2 for Aurora Monitoring – Part 2

Now that Percona PMM is installed and running – see Part 1 – let's add an Aurora database to it. For this I created one small Aurora (PostgreSQL-compatible) database, engine version 16.6, on a db.t3.medium instance.
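Registering the Aurora endpoint with PMM comes down to one command. This is a sketch with placeholder endpoint, user, and password – use the monitoring account you created on the Aurora instance:

```shell
# Register the Aurora PostgreSQL endpoint with PMM (endpoint, user and
# password below are placeholders for my test instance).
pmm-admin add postgresql \
  --host=aurora-test.cluster-xxxx.eu-central-1.rds.amazonaws.com \
  --port=5432 \
  --username=pmm \
  --password='secret' \
  --service-name=aurora-pg16 \
  --query-source=pgstatements
```

After this, the new service shows up in the PMM web UI under PostgreSQL dashboards.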

Setup PMM Server on EC2 for Aurora Monitoring – Part 1

Percona Monitoring and Management (PMM) is an open-source tool for monitoring databases like PostgreSQL and MySQL, including Amazon Aurora. We will deploy PMM Server on an EC2 instance using Docker for a fast and simple setup. This allows us to collect and visualize performance metrics from Aurora in real time. The EC2 instance acts as a central monitoring hub without needing to install agents on the database itself.
Here are the steps for the PMM setup.
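The Docker-based deployment described above boils down to a few commands on the EC2 instance (shown here for the PMM 2 image; a sketch, not the full hardening you would want in production):

```shell
# Pull and start PMM Server on the EC2 instance
docker pull percona/pmm-server:2
docker volume create pmm-data
docker run -d \
  --name pmm-server \
  --restart always \
  -p 443:443 \
  -v pmm-data:/srv \
  percona/pmm-server:2
# The web UI is then available at https://<ec2-public-ip> (default login admin/admin)
```

The named volume keeps metrics history across container restarts and upgrades.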

Oracle RDS Custom – Read Replica

Oracle RDS Read Replicas on AWS provide enhanced scalability and performance by allowing you to create up to five read-only copies of your primary Oracle DB instance. These replicas asynchronously replicate data from the source DB, making them ideal for read-heavy workloads, analytics, and disaster recovery scenarios. They help offload read traffic and can be promoted to standalone databases if needed.

Let's create an Oracle RDS read replica and see what is behind the curtain. I will use Oracle RDS Custom to be able to log in to the underlying machine. Creating an Oracle RDS Custom instance is easy – first you must create a Custom Engine Version (CEV) – check here for details – and then, using this CEV, the Oracle RDS Custom database can be created. You can find more information here.
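Creating the replica itself is a single AWS CLI call. The identifiers and instance class below are placeholders for my test setup:

```shell
# Create a read replica of the RDS Custom Oracle instance
# (identifiers are placeholders). Up to five replicas per source.
aws rds create-db-instance-read-replica \
  --db-instance-identifier my-oracle-replica-1 \
  --source-db-instance-identifier my-oracle-custom \
  --db-instance-class db.m5.large

# Wait until the replica becomes available
aws rds wait db-instance-available \
  --db-instance-identifier my-oracle-replica-1
```

Once the replica is available, you can SSH into its underlying host (the point of using RDS Custom) and inspect how the asynchronous replication is wired up.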

 

Oracle RDS Multi-AZ Failover: High Availability in AWS

Amazon RDS Multi-AZ deployments enhance high availability and reliability for database instances by automatically replicating data to a standby instance in a different Availability Zone (AZ). In the event of a failure, AWS performs an automatic failover to the standby, minimizing downtime and ensuring business continuity. This setup is essential for mission-critical applications that require seamless database availability and disaster recovery.

Let's see how long it takes to perform an Oracle RDS database failover. To find out, I created a small Oracle RDS database and tested failover with the AWS CLI.
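The test can be driven entirely from the CLI: force a failover with a reboot, then read the event stream to see when the failover started and completed (the instance identifier is a placeholder):

```shell
# Trigger a Multi-AZ failover and note the start time
date
aws rds reboot-db-instance \
  --db-instance-identifier my-oracle-db \
  --force-failover

# Follow the RDS events to see when the failover began and finished
aws rds describe-events \
  --source-identifier my-oracle-db \
  --source-type db-instance \
  --duration 30
```

The difference between the "Multi-AZ failover started" and "completed" events gives the failover duration as AWS measures it; timing a reconnect loop from a client gives the application-side view.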

Deploy Oracle on ECS Fargate

If you want auto-scaling and managed containers, AWS ECS is the natural choice. It has many advantages; some of the main ones are:
– Easier Oracle deployment without full VM installations
– More flexibility compared to RDS (can run XE, SE2, or Enterprise)
– Good for cost savings (shutdown when not in use)
– Enables containerized workflows on AWS
– Works well with ECS, Fargate, or Kubernetes for scalability

Let's go through the steps and see how it works.
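At the core, the steps reduce to registering a Fargate task definition for the Oracle image in ECR and running it. This is a sketch – the account ID, ARNs, subnet, and security group are placeholders, and the CPU/memory values are just a reasonable starting point for XE:

```shell
# Register a Fargate task definition for the Oracle XE image in ECR
aws ecs register-task-definition \
  --family oracle-xe \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 2048 --memory 8192 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{
      "name": "oracle-xe",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/oracle-xe:latest",
      "portMappings": [{"containerPort": 1521}],
      "essential": true
  }]'

# Run the task in an existing cluster
aws ecs run-task \
  --cluster oracle-cluster \
  --launch-type FARGATE \
  --task-definition oracle-xe \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'
```

For anything beyond a throwaway test you would also attach persistent storage (e.g. EFS) for the datafiles, since Fargate task storage is ephemeral.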

Add Oracle XE Docker Image to ECR Repository

Here’s a short description of how to get an Oracle XE Docker image from the Oracle Container Registry (https://container-registry.oracle.com) and push it to an ECR repository.

1. Prerequisites
Docker must be installed and running on your local machine.
The AWS CLI must be configured with the necessary permissions to access ECR.
An Oracle account is needed to access the Oracle Container Registry.

Docker and the AWS CLI can be installed on your local machine or, if you prefer, on a small free-tier-eligible EC2 instance. Here an EC2 instance will be used.
How to install Docker on EC2 – check here.
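The pull-tag-push sequence looks roughly like this. The account ID, region, and repository name are placeholders – substitute your own:

```shell
# Pull Oracle XE from the Oracle Container Registry (uses your Oracle account)
docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/express:latest

# Create the ECR repository and log Docker in to ECR
aws ecr create-repository --repository-name oracle-xe
aws ecr get-login-password --region eu-central-1 \
  | docker login --username AWS \
      --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com

# Tag the image for ECR and push it
docker tag container-registry.oracle.com/database/express:latest \
  123456789012.dkr.ecr.eu-central-1.amazonaws.com/oracle-xe:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/oracle-xe:latest
```

Once pushed, the image can be referenced from ECS task definitions or pulled onto any EC2 host with ECR access.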

Deploy Oracle on Docker in EC2

Deploying Oracle Database inside a Docker container on AWS has the following advantages –

– Easier Oracle deployment without full VM installations.
– More flexibility compared to RDS (can run XE, SE2, or Enterprise).
– Good for cost savings (shutdown when not in use).
– Enables containerized workflows on AWS.
– Works well with ECS, Fargate, or Kubernetes for scalability.

And of course it is a good opportunity to play a bit with Oracle on AWS and refresh your skills with a simple hands-on exercise.

Running Oracle on Docker is simple and gives you full control over your container.
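Starting the container on the EC2 host is a one-liner. The password below is a placeholder; `ORACLE_PWD` is the variable Oracle's official images use for the SYS/SYSTEM password:

```shell
# Start Oracle XE in a container on the EC2 host (password is a placeholder)
docker run -d \
  --name oracle-xe \
  -p 1521:1521 \
  -e ORACLE_PWD='MySecret#123' \
  container-registry.oracle.com/database/express:latest

# Follow the logs until "DATABASE IS READY TO USE!", then connect:
docker logs -f oracle-xe
# sqlplus system/'MySecret#123'@//localhost:1521/XEPDB1
```

Stopping the container (or the instance) when idle is where the cost saving mentioned above comes from.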

Oracle RDS Custom – Performance Analysis of AWS db.m5 Instances: Sequential & Random Read/Write Benchmarks

This post presents the results of IOPS performance testing for Oracle running on RDS Custom in AWS. The tests were conducted using the fio benchmarking tool across three instance types: db.m5.large, db.m5.xlarge, and db.m5.4xlarge. The storage used was General Purpose SSD (gp2) volumes. The database remained active but was not under any load during the tests.
Here is a step-by-step guide to creating a CEV and deploying an Oracle Database on RDS Custom.
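For reference, a fio invocation along the lines of the one below was used per test case. The file path and sizes are placeholders, and the same shape with `--rw=write`, `--rw=randread`, and `--rw=randwrite` (with a smaller block size for the random tests) covers the remaining scenarios:

```shell
# Example fio run for the sequential read test (direct I/O, 60s, time-based)
fio --name=seqread \
    --filename=/u01/fio/testfile \
    --rw=read \
    --bs=1M \
    --size=4G \
    --numjobs=1 \
    --iodepth=16 \
    --ioengine=libaio \
    --direct=1 \
    --runtime=60 \
    --time_based \
    --group_reporting
```

`--direct=1` bypasses the page cache so the numbers reflect the gp2 volume rather than host memory.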

Oracle Linux on EC2

AWS offers various preconfigured OS images (Amazon Machine Images, or AMIs) for launching EC2 instances – Amazon Linux, Ubuntu, Red Hat, SUSE – but still no Oracle Linux (OL).
This doesn’t mean it is impossible to have your own EC2 instance running OL – here’s how it can be done.
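One way to locate a suitable OL image is to search the available AMIs by name. The `OL9*` name filter is an assumption – adjust it for the OL version you want – and the image ID in the launch command is a placeholder for whichever AMI the search returns:

```shell
# Find recent Oracle Linux AMIs in the current region, newest first
aws ec2 describe-images \
  --filters "Name=name,Values=OL9*" "Name=state,Values=available" \
  --query 'reverse(sort_by(Images,&CreationDate))[:3].{Name:Name,ImageId:ImageId}' \
  --output table

# Launch an instance from the chosen AMI (ID and key name are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key
```

As always with community or marketplace images, verify the AMI owner before launching from it.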