Feed aggregator

Generating DDL in Oracle SQL Developer

Tom Kyte - Mon, 2019-11-11 11:49
Dear AskTom-Team! Is there a possibility in the Oracle Developer to suppress duplicate DDL code? For example when generating the DDL from my relational model the foreign key constraint is generated twice, i.e. for both tables that are involved in th...
Categories: DBA Blogs

A simple Terraform script to create an AWS EC2 playground

Yann Neuhaus - Mon, 2019-11-11 02:39

When I started to write the blog about AWS SSM I quickly realized that I needed a way to bring up and destroy my AWS EC2 playground in an easy and reproducible way. There are several options for this: I could have used the AWS command line interface or AWS CloudFormation. Both work pretty well and would have been more than enough for my simple requirements. In the end I decided to use Terraform for one simple reason: with Terraform you can not only provision on AWS but also on Azure, GCP and many others. So, instead of using a tool which is limited to AWS, using a vendor-independent tool makes much more sense.

For the AWS SSM blog I had several requirements I wanted to address:

  • The playground should run in my own VPC so I will not affect any other people doing tests at the same time
  • Because I had Windows and Linux EC2 instances, two security groups should get created: one allowing SSH and one allowing RDP into the EC2 machines
  • Both security groups should allow outbound connections to the internet by using an internet gateway
  • Finally, two Windows, two Ubuntu, two Amazon Linux, two Red Hat, two SLES and two CentOS instances should get provisioned

Using Terraform all of this is actually quite simple. The first piece of information you need to tell Terraform is the provider you want to use. In my case this is AWS, and I am using my “test” AWS profile which is configured for my AWS command line interface; the AWS region I want to use is Frankfurt:

// set the provider to AWS and the AWS region to eu-central-1
provider "aws" {
  profile    = "test"
  region     = "eu-central-1"
}

Because I want to limit direct access to the EC2 instances to my own IP address for security reasons, I define a local variable which I change each time I set this up from a different network. Input variables could also be used for that, but in my case a local variable is just fine:

locals {
  my_ip        = ["37.201.6.8/32"]
}
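
If you prefer the input variable route instead, a minimal sketch could look like this (not part of the original script; assumes Terraform 0.12 syntax, and references would then use var.my_ip instead of local.my_ip):

// alternative: an input variable that can be overridden with -var or a .tfvars file
variable "my_ip" {
  type    = list(string)
  default = ["37.201.6.8/32"]
}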

The first component that actually gets provisioned on AWS is the VPC, using the aws_vpc resource:

// create the virtual private network
resource "aws_vpc" "dwe-vpc" {
  cidr_block = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support = true

  tags = {
    Name = "dwe-vpc"
  }
}

This defines the IP range I'll be using, enables the two DNS parameters and adds a tag. As I want my EC2 instances to be able to connect to the internet, an internet gateway gets provisioned in the next step and attached to the VPC that was created before:

// create the internet gateway
resource "aws_internet_gateway" "dwe-igw" {
  vpc_id = "${aws_vpc.dwe-vpc.id}"

  tags = {
    Name = "dwe-igw"
  }
}

Next we need a subnet, again attached to the VPC:

// create a dedicated subnet
resource "aws_subnet" "dwe-subnet" {
  vpc_id            = "${aws_vpc.dwe-vpc.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a"

  tags = {
    Name = "dwe-subnet"
  }
}

Routing tables are an important concept in AWS and define how traffic is routed in the VPC. The routing table below enables traffic to the internet through the internet gateway:

// create routing table which points to the internet gateway
resource "aws_route_table" "dwe-route" {
  vpc_id = "${aws_vpc.dwe-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.dwe-igw.id}"
  }

  tags = {
    Name = "dwe-igw"
  }
}

Once the routing table is defined it needs to be attached to the subnet:

// associate the routing table with the subnet
resource "aws_route_table_association" "subnet-association" {
  subnet_id      = "${aws_subnet.dwe-subnet.id}"
  route_table_id = "${aws_route_table.dwe-route.id}"
}

For ssh and rdp access to the EC2 instances two security groups get provisioned:

// create a security group for ssh access to the linux systems
resource "aws_security_group" "dwe-sg-ssh" {
  name        = "dwe-sg-ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "${aws_vpc.dwe-vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = local.my_ip
  }

  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "dwe-sg-ssh"
  }
}

// create a security group for rdp access to the windows systems
resource "aws_security_group" "dwe-sg-rdp" {
name        = "dwe-sg-rdp"
vpc_id      = "${aws_vpc.dwe-vpc.id}"
description = "Allow RDP inbound traffic"

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = local.my_ip
  }

  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "dwe-sg-rdp"
  }
}

Both define an inbound rule (ssh and rdp respectively) and an outbound rule to be able to connect to the internet. Now that all the basics are there, the EC2 instances themselves get provisioned based on the building blocks from above:

// create two Ubuntu instances
resource "aws_instance" "i-ubuntu-linux-prod" {
  ami                         = "ami-0cc0a36f626a4fdf5"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = {
    Name = "i-ubuntu-linux-prod"
  }  
}

resource "aws_instance" "i-ubuntu-linux-test" {
  ami                         = "ami-0cc0a36f626a4fdf5"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-ubuntu-linux-test"
  } 
}

// create two Amazon linux instances
resource "aws_instance" "i-amazon-linux-prod" {
  ami                         = "ami-0f3a43fbf2d3899f7"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-amazon-linux-prod"
  } 
}

resource "aws_instance" "i-amazon-linux-test" {
  ami                         = "ami-0f3a43fbf2d3899f7"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-amazon-linux-test"
  } 
}

// create two Red Hat linux instances
resource "aws_instance" "i-redhat-linux-prod" {
  ami                         = "ami-0badcc5b522737046"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device {
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-redhat-linux-prod"
  } 
}

resource "aws_instance" "i-redhat-linux-test" {
  ami                         = "ami-0badcc5b522737046"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-redhat-linux-test"
  }
}

// create two SUSE linux instances
resource "aws_instance" "i-suse-linux-prod" {
  ami                         = "ami-03b86a97a8f02d44e"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-suse-linux-prod"
  }
}

resource "aws_instance" "i-suse-linux-test" {
  ami                         = "ami-03b86a97a8f02d44e"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-suse-linux-test"
  }
}

// create two CentOS linux instances
resource "aws_instance" "i-centos-linux-prod" {
  ami                         = "ami-04cf43aca3e6f3de3"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-centos-linux-prod"
  }
}

resource "aws_instance" "i-centos-linux-test" {
  ami                         = "ami-04cf43aca3e6f3de3"
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"
  
  root_block_device {     
    volume_size           = "10"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  
  tags = { 
    Name = "i-centos-linux-test"
  }
}

// create two Windows instances
resource "aws_instance" "i-windows-prod" {
  ami           = "ami-034937fd7f621ba85"
  instance_type = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = { 
    Name = "i-windows-prod"
  }
}

resource "aws_instance" "i-windows-test" {
  ami           = "ami-034937fd7f621ba85"
  instance_type = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dwe-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dwe-subnet.id}"
  associate_public_ip_address = "true"

  root_block_device { 
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }

  tags = {
    Name = "i-windows-test"
  }
}

And that’s it. To check what will actually be done by Terraform there is the “plan” command. As the output is quite long I’ll skip most of it and just present the last few lines:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform plan

...

Plan: 19 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

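If you want “apply” to execute exactly the plan you reviewed, the plan can be saved to a file and applied from that file (a small sketch, not part of the original run):

dwe@dwe:~/Documents/aws/ssm_demo$ terraform plan -out=playground.tfplan
dwe@dwe:~/Documents/aws/ssm_demo$ terraform apply playground.tfplan
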
Once you are happy with that you can “apply” the execution plan; everything will get provisioned after you confirm with “yes”:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform apply

Plan: 19 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Sit back, relax and one or two minutes later your AWS playground is ready:

aws_vpc.dwe-vpc: Creating...
aws_vpc.dwe-vpc: Creation complete after 2s [id=vpc-026cdc481d5365074]
aws_internet_gateway.dwe-igw: Creating...
aws_subnet.dwe-subnet: Creating...
aws_security_group.dwe-sg-ssh: Creating...
aws_security_group.dwe-sg-rdp: Creating...
aws_subnet.dwe-subnet: Creation complete after 0s [id=subnet-028e27fef8df3b963]
aws_internet_gateway.dwe-igw: Creation complete after 0s [id=igw-0656108a04d5ea0a5]
aws_route_table.dwe-route: Creating...
aws_security_group.dwe-sg-rdp: Creation complete after 1s [id=sg-0764508e3a5234393]
aws_route_table.dwe-route: Creation complete after 1s [id=rtb-07691bc54b40af0ae]
aws_route_table_association.subnet-association: Creating...
aws_instance.i-windows-prod: Creating...
aws_instance.i-windows-test: Creating...
aws_security_group.dwe-sg-ssh: Creation complete after 1s [id=sg-053995952f558a4ff]
aws_instance.i-centos-linux-test: Creating...
aws_instance.i-amazon-linux-prod: Creating...
aws_instance.i-amazon-linux-test: Creating...
aws_instance.i-redhat-linux-prod: Creating...
aws_instance.i-centos-linux-prod: Creating...
aws_instance.i-redhat-linux-test: Creating...
aws_instance.i-ubuntu-linux-test: Creating...
aws_route_table_association.subnet-association: Creation complete after 0s [id=rtbassoc-07c7d4282033c4a71]
aws_instance.i-ubuntu-linux-prod: Creating...
aws_instance.i-windows-prod: Still creating... [10s elapsed]
aws_instance.i-windows-test: Still creating... [10s elapsed]
aws_instance.i-centos-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-prod: Still creating... [10s elapsed]
aws_instance.i-redhat-linux-prod: Still creating... [10s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [10s elapsed]
aws_instance.i-redhat-linux-test: Still creating... [10s elapsed]
aws_instance.i-ubuntu-linux-test: Still creating... [10s elapsed]
aws_instance.i-ubuntu-linux-prod: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-test: Creation complete after 13s [id=i-02706717c3440723a]
aws_instance.i-suse-linux-test: Creating...
aws_instance.i-ubuntu-linux-test: Creation complete after 16s [id=i-0d2999aa319a90a3d]
aws_instance.i-centos-linux-test: Creation complete after 16s [id=i-03923fcf9d5881421]
aws_instance.i-suse-linux-prod: Creating...
aws_instance.i-ubuntu-linux-prod: Creation complete after 16s [id=i-00967725bc758f3ef]
aws_instance.i-redhat-linux-test: Creation complete after 16s [id=i-02a705327fb0acb61]
aws_instance.i-windows-prod: Creation complete after 16s [id=i-09c3fcc90491ef2cf]
aws_instance.i-redhat-linux-prod: Creation complete after 16s [id=i-0161726dfe1ed890b]
aws_instance.i-windows-test: Creation complete after 16s [id=i-02f567d11d32444fd]
aws_instance.i-amazon-linux-prod: Still creating... [20s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [20s elapsed]
aws_instance.i-suse-linux-test: Still creating... [10s elapsed]
aws_instance.i-amazon-linux-prod: Creation complete after 23s [id=i-0b799879e77ce8b33]
aws_instance.i-suse-linux-prod: Still creating... [10s elapsed]
aws_instance.i-centos-linux-prod: Still creating... [30s elapsed]
aws_instance.i-centos-linux-prod: Creation complete after 32s [id=i-0482d958849f86483]
aws_instance.i-suse-linux-test: Still creating... [20s elapsed]
aws_instance.i-suse-linux-prod: Still creating... [20s elapsed]
aws_instance.i-suse-linux-test: Still creating... [30s elapsed]
aws_instance.i-suse-linux-test: Creation complete after 32s [id=i-0b35d559853f9f0d6]
aws_instance.i-suse-linux-prod: Still creating... [30s elapsed]
aws_instance.i-suse-linux-prod: Creation complete after 33s [id=i-062df970b894a23da]

Apply complete! Resources: 19 added, 0 changed, 0 destroyed.

Once you’re done with your tests, simply destroy everything by using the “destroy” command:

dwe@dwe:~/Documents/aws/ssm_demo$ terraform destroy
...
Plan: 0 to add, 0 to change, 19 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
...
aws_security_group.dwe-sg-rdp: Destroying... [id=sg-0764508e3a5234393]
aws_security_group.dwe-sg-rdp: Destruction complete after 1s
aws_security_group.dwe-sg-ssh: Destruction complete after 1s
aws_subnet.dwe-subnet: Destruction complete after 1s
aws_vpc.dwe-vpc: Destroying... [id=vpc-026cdc481d5365074]
aws_vpc.dwe-vpc: Destruction complete after 0s

Destroy complete! Resources: 19 destroyed.

Quite easy, always reproducible and fast.
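
As a side note: the script above repeats one aws_instance block per machine. On Terraform 0.12.6 or later the duplication could be reduced with for_each; the following is a sketch under that assumption and not part of the original script:

// map instance names to AMIs and create all of them from one resource block
variable "linux_instances" {
  type = map(string)
  default = {
    "i-ubuntu-linux-prod" = "ami-0cc0a36f626a4fdf5"
    "i-ubuntu-linux-test" = "ami-0cc0a36f626a4fdf5"
  }
}

resource "aws_instance" "linux" {
  for_each                    = var.linux_instances
  ami                         = each.value
  instance_type               = "t2.micro"
  key_name                    = "dwe-key-pair"
  vpc_security_group_ids      = [aws_security_group.dwe-sg-ssh.id]
  subnet_id                   = aws_subnet.dwe-subnet.id
  associate_public_ip_address = true

  root_block_device {
    volume_size           = 10
    volume_type           = "standard"
    delete_on_termination = true
  }

  tags = {
    Name = each.key
  }
}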

The post A simple Terraform script to create an AWS EC2 playground appeared first on Blog dbi services.

odacli create-database error DCS-10802: Insufficient disk space on file system: database

Yann Neuhaus - Mon, 2019-11-11 02:27
Introduction

I was reimaging an X6-2M ODA after various patching troubles, and everything was fine. But after several databases had been created, the next ones could no longer be created.

DCS-10802: Insufficient disk space on file system: database. Expected free space (MB): {1}, available space (MB): {2}

I’ve spent some time on it, and finally found the cause of the problem. And the solution.

Context

After successfully reimaging an X6-2M ODA with 18.5 and applying the patch for the firmware, ILOM and disks, I was creating all the databases with odacli using the following commands:


odacli create-database -hm XyXyXyXyXyXy --dbstorage ACFS --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json
odacli create-database ...

Each database is created with the smallest shape, odb1s, as I later fine-tune each instance according to my needs.

After the 8th or 9th database was created, the next ones ended with a failure:

odacli describe-job -i "2ad9f7e8-331d-4a82-bb7b-c88ddad1cdf8"

Job details
----------------------------------------------------------------
ID: 2ad9f7e8-331d-4a82-bb7b-c88ddad1cdf8
Description: Database service creation with db name: HPMVRN
Status: Failure
Created: November 8, 2019 1:33:23 PM CET
Message: DCS-10802:Insufficient disk space on file system: database. Expected free space (MB): {1}, available space (MB): {2}

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:58 PM CET Failure
Database Service creation November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:58 PM CET Failure
Setting up ssh equivalance November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:23 PM CET Success
Creating volume datHPMVRN November 8, 2019 1:33:23 PM CET November 8, 2019 1:33:42 PM CET Success
Creating ACFS filesystem for DATA November 8, 2019 1:33:42 PM CET November 8, 2019 1:33:57 PM CET Success
Database Service creation November 8, 2019 1:33:57 PM CET November 8, 2019 1:33:58 PM CET Failure
Database Creation November 8, 2019 1:33:57 PM CET November 8, 2019 1:33:58 PM CET Failure

Analysis

The error seems obvious: insufficient disk space. Let’s check the disk space:


df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot
30G 5.6G 23G 20% /
tmpfs 126G 1.1G 125G 1% /dev/shm
/dev/sda1 477M 41M 411M 9% /boot
/dev/mapper/VolGroupSys-LogVolOpt
109G 85G 19G 82% /opt
/dev/mapper/VolGroupSys-LogVolU01
148G 31G 110G 22% /u01
/dev/asm/datdbtest-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/DBTEST
/dev/asm/datgotst-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GOTST
/dev/asm/datgeval-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GEVAL
/dev/asm/commonstore-8
5.0G 391M 4.7G 8% /opt/oracle/dcs/commonstore
/dev/asm/datsmval-8
100G 44G 57G 44% /u02/app/oracle/oradata/SMVAL
/dev/asm/datvival-8
100G 34G 67G 34% /u02/app/oracle/oradata/VIVAL
/dev/asm/datvjval-8
100G 56G 45G 56% /u02/app/oracle/oradata/VJVAL
/dev/asm/dump-8 200G 132G 69G 66% /dpdumplocal
/dev/asm/dataoval-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/AOVAL
/dev/asm/datgrtst-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/GRTST
/dev/asm/datgival-8
100G 7.8G 93G 8% /u02/app/oracle/oradata/GIVAL
/dev/asm/reco-329 74G 56G 19G 76% /u03/app/oracle
/dev/asm/datgetst-8 100G 2.2G 98G 3% /u02/app/oracle/oradata/GETST
/dev/asm/datgftst-8
100G 30G 71G 30% /u02/app/oracle/oradata/GFTST
/dev/asm/datgctst-8
100G 2.2G 98G 3% /u02/app/oracle/oradata/GCTST
/dev/asm/dathpmvrn-8 100G 448M 100G 1% /u02/app/oracle/oradata/HPMVRN

No filesystem is full, and create-database managed to create the acfs volume for data successfully. Let’s try to put something into it:


cp /the_path/the_big_file /u02/app/oracle/oradata/HPMVRN/

No problem with this acfs volume.

Let’s try to create the database into ASM:


odacli list-databases | tail -n 1

bed7f9a0-e108-4423-8b2c-d7c33c795e87 HPMVRN Si 12.1.0.2 false Oltp Odb1s Acfs Failed 0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb

odacli delete-database -i "bed7f9a0-e108-4423-8b2c-d7c33c795e87"

odacli create-database -hm XyXyXyXyXyXy --dbstorage ASM --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json

No problem here; it really seems to be related to acfs.

Where else does create-database need free space? For sure, in the RECOvery area filesystem, created along with the very first database:


Filesystem Size Used Avail Use% Mounted on
/dev/asm/reco-329 74G 56G 19G 76% /u03/app/oracle

Let’s create a file in this filesystem:

cp /the_path/the_big_file /u03/app/oracle/

No problem.

Quite strange, and as the database is not yet created, there is no alert_HPMVRN.log to look for the error…

Maybe the RECOvery area filesystem is not big enough for odacli. The acfs concept relies on autoextensible filesystems, but as the Fast Recovery Areas of all my databases probably won’t fit into the allocated 74GB, odacli may fail. How to extend the acfs RECOvery filesystem? Just put enough dummy files in it, and then remove them.


cd /u03/app/oracle
cp /the_path/the_big_file tmpfile1
cp tmpfile1 tmpfile2
cp tmpfile1 tmpfile3
cp tmpfile1 tmpfile4
cp tmpfile1 tmpfile5
cp tmpfile1 tmpfile6
rm -rf tmp*

df -h /u03/app/oracle
Filesystem Size Used Avail Use% Mounted on
/dev/asm/reco-329 88G 59G 30G 67% /u03/app/oracle

The RECOvery filesystem is slightly bigger; let’s retry the database creation:


odacli delete-database -i "985b1d37-6f84-4d64-884f-3a429c195a5d"

odacli create-database -hm XyXyXyXyXyXy --dbstorage ACFS --characterset WE8MSWIN1252 --databaseUniqueName HPMVRN --dbhomeid '0704ef7c-0cb9-4525-8edb-8d70b8f7ddfb' --dblanguage AMERICAN --dbname HPMVRN --dbshape odb1s --dbterritory AMERICA --no-cdb --no-dbconsole --json

odacli describe-job -i "d5022d8b-9ddb-4f93-a84a-36477657794f"

Job details
----------------------------------------------------------------
ID: d5022d8b-9ddb-4f93-a84a-36477657794f
Description: Database service creation with db name: HPMVRN
Status: Success
Created: November 8, 2019 1:47:28 PM CET
Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance November 8, 2019 1:47:30 PM CET November 8, 2019 1:47:40 PM CET Success
Creating volume datHPMVRN November 8, 2019 1:47:40 PM CET November 8, 2019 1:47:59 PM CET Success
Creating ACFS filesystem for DATA November 8, 2019 1:47:59 PM CET November 8, 2019 1:48:14 PM CET Success
Database Service creation November 8, 2019 1:48:14 PM CET November 8, 2019 1:54:38 PM CET Success
Database Creation November 8, 2019 1:48:14 PM CET November 8, 2019 1:53:02 PM CET Success
Change permission for xdb wallet files November 8, 2019 1:53:02 PM CET November 8, 2019 1:53:02 PM CET Success
Place SnapshotCtrlFile in sharedLoc November 8, 2019 1:53:02 PM CET November 8, 2019 1:53:04 PM CET Success
SqlPatch upgrade November 8, 2019 1:54:02 PM CET November 8, 2019 1:54:35 PM CET Success
updating the Database version November 8, 2019 1:54:35 PM CET November 8, 2019 1:54:38 PM CET Success
create Users tablespace November 8, 2019 1:54:38 PM CET November 8, 2019 1:54:40 PM CET Success

This is it. I was then able to create the next databases without any problem.

Final words

Keep in mind that all your Fast Recovery Areas should fit in your dedicated filesystem. To make sure that you will not encounter problems, resize your RECOvery filesystem to the sum of all your target FRA sizes with acfsutil:

acfsutil size 500G -d /dev/asm/reco-329 /u03/app/oracle

An autoextensible filesystem never guarantees that the extension will succeed.
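
To estimate the sum of the target FRA sizes, each instance can be queried for its recovery area settings. A minimal sketch:

-- run in every instance and add up the limit_gb values
select name,
       round(space_limit/1024/1024/1024,1) as limit_gb,
       round(space_used /1024/1024/1024,1) as used_gb
from   v$recovery_file_dest;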

Or simply go for ASM instead of acfs on your ODA if you do not need acfs features like snapshots. ASM is simpler and more efficient because it does not provision disk space for each database like acfs does.

The post odacli create-database error DCS-10802: Insufficient disk space on file system: database appeared first on Blog dbi services.

Exadata X8M - World's Fastest Database Machine

Syed Jaffar - Sun, 2019-11-10 08:11
Exadata X8M was launched during OOW 2019 and is billed as the world's fastest database machine. Let's walk through the new additions that make the X8M the world's fastest database machine.

The Exadata X8M is the industry's first database machine integrated with Intel Optane DC persistent memory (read more about this) and 100 gigabit RDMA over Converged Ethernet. This dramatically improves performance for all sorts of workloads, such as OLTP, analytics, IoT and high-frequency trading, by eliminating storage access bottlenecks. Persistent memory with RoCE networking can reduce IO latency significantly and boost performance by 2.5x.

It uses RDMA directly from the database to access persistent memory in the smart storage servers, bypassing the entire OS, IO and network software stacks. This delivers higher throughput with lower latency and also frees CPU resources on the storage servers to execute more smart scan queries for analytic workloads.

Its in-memory performance combined with all the advantages of shared storage benefits both analytics and OLTP workloads. Direct database access to shared persistent memory is a real game changer for applications that demand large amounts of data.

For more details, read the link below:

https://www.oracle.com/corporate/pressrelease/oow19-oracle-unleashes-worlds-fastest-database-machine-091619.html




    Oracle Cloud: Feature Set

    Dietrich Schroff - Sun, 2019-11-10 04:34
    For all readers who want to get a short overview of the services that can be configured inside Oracle Cloud, here is a walkthrough of the menus.
    The Main menu shows the following items:
    • Core infrastructure
      • Compute
      • Block Storage
      • Object Storage
      • File Storage
      • Networking 
    • Database
      • Bare Metal, VM, and Exadata
      • Autonomous Data Warehouse
      • Autonomous Transaction Processing
      • Data Safe
      • Exadata Cloud Connector
    • Solution and Patterns
      • Analytics
      • Resource Manager
      • Email Delivery
      • Application Integration
      • Monitoring
      • Developer Services Marketplace
    • Governance and Administration
      • Account Management
      • Identity
    Inside the Compute item the following services can be found:
    • Compute
      • Instances
      • Dedicated Virtual Hosts
      • Instance Configurations
      • Cluster Networks
      • Autoscaling Configurations
      • Custom Images
      • Boot Volumes
      • Boot Volume Backups
    • Block Storage
      • Block Volumes
      • Block Volume Backups
      • Volumes Groups
      • Volume Group Backups
    • Object Storage
      • Object Storage
      • Data Transfer
    • File Storage
      • File Systems
      • Mount Targets
    • Networking
      • Virtual Cloud Networks
      • Dynamic Routing Gateways
      • Customer-Premises Equipment
      • IPSec Connections
      • Load Balancers
      • FastConnect
      • Public IPs
      • DNS Zone Management
      • Traffic Management Steering Policies

    • Administration
      • Tenancy Details
      • Announcements


    The menu Database has no subitems.

    For most of these services you can find documentation here:
    https://docs.cloud.oracle.com/iaas/Content/home.htm




    Nested Tables

    Jonathan Lewis - Sat, 2019-11-09 05:28

    This note is a little side effect of answering a question about the total space associated with a table, including its indexes, LOB columns and nested tables. The first suggestion I published failed to take account of the fact that nested tables can contain their own nested tables so I had to modify the code.

    The basic change was easy – in the original code I had joined to the view dba_nested_tables to translate between a table_name and its parent_table_name. The only change I needed to make was to replace the reference to the view with a different view that joined a table_name to its “oldest ancestor”. To achieve this I had to do two things: first, create a table with multiple levels of nesting, then create a suitable view definition.  For reference – because it may help somebody – I’ve published the two stages here.

    A revolting nested table:

    The following code creates three object types and three table types; each higher-level object type includes an attribute of the table type defined before it, so the table demo_nest_2 ends up with nested tables that are themselves nested three levels deep.

    
    rem
    rem     Script:         nested_tables.sql
    rem     Author:         Jonathan Lewis
    rem     Dated:          Nov 2019
    rem
    rem     Last tested 
    rem             19.3.0.0
    rem             12.2.0.1
    rem
    
    create or replace type jpl_item3 as object (v1 varchar2(10), n3 number);
    /
    
    create or replace type jpl_tab3_type as table of jpl_item3;
    /
    
    create or replace type jpl_item2 as object(v2 varchar2(10), jpl3 jpl_tab3_type);
    /
    
    create or replace type jpl_tab2_type as table of jpl_item2;
    /
    
    create or replace type jpl_item1 as object(v3 varchar2(10), jpl2 jpl_tab2_type);
    /
    
    create or replace type jpl_tab1_type as table of jpl_item1;
    /
    
    create table demo_nest_2 (
            id      number  constraint d2_pk primary key,
            jpl1    jpl_tab1_type
    )
    segment creation immediate
    nested table jpl1 store as t_jpl1
            (
            nested table jpl2  store as t_jpl2
                    (
                    nested table jpl3 store as t_jpl3 
                    return as locator
                    )
            return as locator
            )
    return as locator
    /
    
    

    I’ve never seen nested tables manually created in a production system though I believe they are likely to appear (along with varrays and LOBs) as a side effect of some XML or JSON mechanisms, but many years ago (in Practical Oracle 8i, possibly) I discussed the pros and cons of returning them “by value” or (as I have here) “by reference”. As you can see, you need to exercise some care with brackets and locating the text as you try to refine multiple levels of nesting.

    The Ancestor View

    We’ll take this in three steps – first, report from user_nested_tables. (The final script for reporting space uses dba_nested_tables with a predicate available on the owner column, but I don’t give myself DBA privileges while writing quick and dirty bits of SQL.)

    
    select 
            parent_table_name, parent_table_column, table_name, 
            ltrim(storage_spec) storage_spec,       -- the DBA view definition includes lpad( , 30) !!
            ltrim(return_type ) return_type         -- the DBA view definition includes lpad( , 20) !!
    from 
            user_nested_tables
    order by
            parent_table_name, parent_table_column
    /
    
    PARENT_TABLE_NAME    PARENT_TABLE_COLUMN  TABLE_NAME           STORAGE_SPEC         RETURN_TYPE
    -------------------- -------------------- -------------------- -------------------- --------------------
    DEMO_NEST_2          JPL1                 T_JPL1               USER_SPECIFIED       LOCATOR
    T_JPL1               JPL2                 T_JPL2               USER_SPECIFIED       LOCATOR
    T_JPL2               JPL3                 T_JPL3               DEFAULT              LOCATOR
    
    

    You’ll notice the odd ltrim() – I have no idea why the view defines these columns to be left-padded the way they are; possibly it dates back to the time when something like cast(… as varchar2(30)) wasn’t a possible option.

    Next a simple “connect by” query that uses the above list in a materialized “with subquery” (CTE):

    
    with my_nested_tables as (
    select
            /*+ materialize */
            parent_table_name, table_name
    from
            user_nested_tables
    )
    select
            parent_table, child_table, pathname
    from    (
            select
                    level,
                    sys_connect_by_path(table_name, '/')    pathname,
                    connect_by_root parent_table_name parent_table,
                    table_name child_table
            from
                    my_nested_tables
            connect by
                    parent_table_name = prior table_name
            )
    order by
            parent_table, child_table, pathname
    /
    
    PARENT_TABLE         CHILD_TABLE          PATHNAME
    -------------------- -------------------- ----------------------------------------
    DEMO_NEST_2          T_JPL1               /T_JPL1
    DEMO_NEST_2          T_JPL2               /T_JPL1/T_JPL2
    DEMO_NEST_2          T_JPL3               /T_JPL1/T_JPL2/T_JPL3
    T_JPL1               T_JPL2               /T_JPL2
    T_JPL1               T_JPL3               /T_JPL2/T_JPL3
    T_JPL2               T_JPL3               /T_JPL3
    
    
    

    As required this shows me demo_nest_2 as the owning ancestor of t_jpl1, t_jpl2 and t_jpl3. Unfortunately it has also produced three rows that we don’t want to see in our final space-summing code. But it’s easy enough to get rid of those – the only rows we want are the rows with a parent_table that doesn’t appear as a child_table:

    
    with my_nested_tables as (
    select
            /*+ materialize */
            parent_table_name, table_name
    from
            user_nested_tables
    )
    select  parent_table, child_table, pathname
    from    (
            select
                    level,
                    sys_connect_by_path(table_name, '/')    pathname,
                    connect_by_root parent_table_name parent_table,
                    table_name child_table
            from
                    my_nested_tables
            connect by
                    parent_table_name = prior table_name
            )
    where   (parent_table) not in (
                    select table_name
                    from   my_nested_tables
            )
    order by
            parent_table, child_table, pathname
    /
    
    PARENT_TABLE         CHILD_TABLE          PATHNAME
    -------------------- -------------------- ----------------------------------------
    DEMO_NEST_2          T_JPL1               /T_JPL1
    DEMO_NEST_2          T_JPL2               /T_JPL1/T_JPL2
    DEMO_NEST_2          T_JPL3               /T_JPL1/T_JPL2/T_JPL3
    
    3 rows selected.
    
    
    

    And there’s the result we wanted to see. A first simple corroboration of the fact is that the output corresponds with the “NESTED TABLE” segments reported by user_segments.
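
    For reference, that quick check could be as simple as the following query (a minimal sketch, not part of the original note):

    select segment_name, segment_type, round(bytes/1024) as kb
    from   user_segments
    where  segment_type = 'NESTED TABLE'
    order by segment_name;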

    Of course, having written a query that gets the right result from a table definition we used to help us define the query in the first place we now ought to create a few more tables with different structures to see if the query continues to give the right results in more complex cases.

    What happens, for example, if I create a table with two columns of nested tables, and one of the nested table types also contains two nested tables? What happens if the base table is an index organized table?

    It’s easy to do the second test – just add “organization index” immediately after “segment creation immediate” in the table creation statement. The correct results drop out.

    As for the first test – here’s the SQL to create one example (and the query still gets the right answers – even if you change the table to be index organized).

    
    drop type jpl_tab1_type;
    drop type jpl_item1;
    
    drop type jpl_tab2_type;
    drop type jpl_item2;
    
    drop type jpl_tab3_type;
    drop type jpl_item3;
    
    purge recyclebin;
    
    create or replace type jpl_item3 as object (v1 varchar2(10), n3 number);
    /
    
    create or replace type jpl_tab3_type as table of jpl_item3;
    /
    
    create or replace type jpl_item2 as object(v2 varchar2(10), jpl3 jpl_tab3_type, jpl3x jpl_tab3_type);
    /
    
    create or replace type jpl_tab2_type as table of jpl_item2;
    /
    
    create or replace type jpl_item1 as object(v3 varchar2(10), jpl2 jpl_tab2_type)
    /
    
    create or replace type jpl_tab1_type as table of jpl_item1;
    /
    
    create table demo_nest_3 (
            id      number  constraint d2_pk primary key,
            jpl1    jpl_tab1_type,
            jpl1a   jpl_tab1_type
    )
    segment creation immediate
    -- organization index
    nested table jpl1 store as t_jpl1
            (
            nested table jpl2  store as t_jpl2
                    (
                    nested table jpl3 store as t_jpl3 
                    return as locator
                    nested table jpl3x store as t_jpl3x 
                    return as locator
                    )
            return as locator
            )
    return as locator
    nested table jpl1a store as t_jpl1a
            (
            nested table jpl2  store as t_jpl2a
                    (
                    nested table jpl3 store as t_jpl3a
                    return as locator
                    nested table jpl3x store as t_jpl3xa
                    return as locator
                    )
            return as locator
            )
    return as locator
    /
    
    

    All that remains now is to modify the code to use the equivalent DBA views, with the addition of the owner column, then slot the resulting code into the original query in place of the simple references to dba_nested_tables. If you go to the original posting you’ll see that I’ve done this by wrapping the code into a CTE so that the only changes to the rest of the code appear as two (flagged) changes where the CTE is then used.

    Oracle Database Views and Tables for Oracle GoldenGate

    DBASolved - Fri, 2019-11-08 16:40

    Oracle GoldenGate for Oracle has been moving towards an integrated architecture over the last few releases. This means that there are more views and tables within the Oracle Database that support Oracle GoldenGate. You can quickly find these views and tables by using a where clause with a filter for GoldenGate: select * from all_views […]
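
    The query in the teaser is truncated; as an illustration only (the exact filter is an assumption and view names differ between releases), such a lookup could be written as:

    select owner, view_name
    from   all_views
    where  view_name like '%GOLDENGATE%'
    order by owner, view_name;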

    The post Oracle Database Views and Tables for Oracle GoldenGate appeared first on DBASolved.

    Categories: DBA Blogs

    Avoiding patching madness by using AWS SSM

    Yann Neuhaus - Fri, 2019-11-08 13:15

    As we have more and more customers either currently moving to AWS or having already moved much, if not all, of their workloads to AWS, one question pops up quite frequently: now that we have so many EC2 instances up and running, how do we regularly patch them? There are many tools around for patching operating systems, such as Microsoft SCCM, Red Hat Satellite or SUSE Manager, just to name a few. There are many others as well, but most of them have an important drawback: either they can only patch Windows systems or they can only patch Linux systems (and even then usually only a subset of the distributions currently in use). This is where AWS Systems Manager becomes very handy, as you can patch most of the commonly used operating systems using one single tool. You can even patch your on-prem instances if your network setup includes the on-prem instances as well. In this post I’ll share what we did for one of our customers to remove the pain of either using multiple tools to patch all the operating systems or manually patching all the systems and then (manually) documenting what has been done and what the result was.

    Managed Instances

    Once you start with AWS SSM there are a few key concepts you need to understand. The first one is the so-called “Managed Instance”. For an instance to be managed it must fulfill two requirements:

    1. The AWS agent must be deployed on the instance
    2. An IAM role with the correct permission must be assigned to the instance

    For the AWS agent it is quite easy if you deploy any of these, because the agent will already be there by default:

    • Windows Server 2003-2012 R2 AMIs published in November 2016 or later
    • Amazon Linux
    • Amazon Linux 2
    • Ubuntu Server 16.04
    • Ubuntu Server 18.04

    If you are running something else the agent needs to be installed manually as described here for Windows and here for Linux. For on-prem instances it is straightforward as well (Windows here, Linux here), but you additionally need to create Managed Instance Activations, which is not in the scope of this post.

    For the IAM role we’ll first look at what I have currently deployed:

    These are two Windows, two Red Hat, two Ubuntu, two Amazon Linux, two CentOS and two SUSE instances, one for Prod and one for Test each. The “Prod” and “Test” naming becomes important later, just ignore it for now. If we take a look at how many managed instances we have right now in Systems Manager we will see zero, even though the Windows, Ubuntu and Amazon Linux instances have the SSM agent deployed automatically (otherwise we would see a list of managed instances here):

    We will use the “Quick Setup” here to set up the IAM role and instance profiles:

    For the scope of this post we will go with defaults suggested by AWS:

    We want all the instances:


    The last screen shows an overview of what is currently happening, basically collecting all the information that will be stored in the AWS SSM inventory. Before you’ll see any instances in “Managed Instances” you will need to wait until the pending inventory actions have been completed:

    Once that is done all the managed instances should be there (notice that I manually deployed the AWS SSM agent on the SUSE, CentOS and Red Hat instances, otherwise they would not show up here):
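
    Alternatively, the list of managed instances can be pulled with the AWS CLI; a sketch, where the query expression is just one possible projection:

    aws ssm describe-instance-information \
        --query "InstanceInformationList[].[InstanceId,PlatformName,PingStatus]" \
        --output table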

    Patch baselines

    Once all instances are “managed” there is the next concept we need to understand: patch baselines. Patch baselines basically define which set of patches (classification, severity, …) shall get applied to which product (e.g. Red Hat 7, Red Hat 8, …). As we have six different operating systems we need at least six patch baselines. In reality you’ll probably even need twelve, because you will have production and test systems. Usually you want to have the same patches applied to production as you applied to test some days or weeks before, and therefore we need two patch baselines per operating system flavor. Patch baselines are listed in SSM under “Patch manager”:

    The following screenshots show how I created the patch baseline for the Red Hat test systems:

    For the Red Hat production systems the patch baseline is exactly the same but I’ve put an approval delay of 14 days:

    Why the delay? Suppose you patch your test systems every first Tuesday of a month and then you patch the production systems 14 days later. As you want to have the same patches on production and test, the production patches get an approval delay of 14 days. This is how you can make sure that your systems are on the same level even if production is patched later than test. One point to consider is that this does not apply to the Ubuntu systems, as there is no approval delay for Ubuntu patches:

    Once all the base lines have been created it looks like this:

    The “Patch group” tag

    The next key concept that comes into the game are tags. No matter what you will be doing in the cloud, without tags you’re lost. There is just no way around tags if you want to automate processes in the cloud. When it comes to AWS SSM there is the “Patch Group” tag, and this tag needs to be assigned to each instance you want to have patched automatically. Let’s assume you want to have the test instances patched every first Saturday of each month and the production instances every third Saturday of each month (6 a.m. for both). In addition all the Linux hosts should be patched before the Windows hosts, because the Linux hosts run the databases and the Windows hosts run the application servers or the application (or, for whatever reason, there is a dependency between the hosts and you need to follow a defined order for patching).

    What we did to fulfill this requirement is to populate the “Patch Group” tag like this:

    1. All test instances get the value: patch_test_0600_batch_1 or patch_test_0600_batch_2
    2. All production instances get the value: patch_prod_0600_batch_1 or patch_prod_0600_batch_2

    The “*batch” values build the dependency between the Linux and the Windows hosts. Taking one test Linux and one test Windows instance as an example, the tags would look like this:

    I’ve assigned the tags to all the remaining instances accordingly:

    1. i-amazon-linux-test: patch_test_0600_batch_1
    2. i-centos-linux-test: patch_test_0600_batch_1
    3. i-redhat-linux-test: patch_test_0600_batch_1
    4. i-suse-linux-test: patch_test_0600_batch_1
    5. i-ubuntu-linux-test: patch_test_0600_batch_1
    6. i-windows-test: patch_test_0600_batch_2

    Same procedure for the production instances, but using the prod_* tags of course. Only assigning these tags to the instances is not enough though. We need to assign them to the patch baselines as well, so the correct baseline gets selected when patching runs on the systems. Assigning a “Patch Group” to a patch baseline basically links the patch baseline to the instances with the same tag value per operating system. If you don’t do that you will always get the default patch baseline for an operating system, and you have no chance of applying the same patches to test and prod because you cannot have different approval delays.
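
    By the way, assigning the “Patch Group” tag does not have to happen in the console; the same could be done from the AWS CLI (the instance ID below is just a placeholder):

    aws ec2 create-tags \
        --resources i-0123456789abcdef0 \
        --tags 'Key=Patch Group,Value=patch_test_0600_batch_1'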

    This needs to be done for all the patch baselines we created above. The patch_test_0600_batch* tags get attached to the baselines for test and the patch_prod_0600_batch* tags get attached to the baselines for production.

    Maintenance windows

    The last building block are the maintenance windows. They define when an action will be executed on which systems. You can use them not only for patching but for many other tasks as well (e.g. running a PowerShell script on Windows hosts, running a shell script on Linux hosts, running an Ansible playbook and much more).

    As we want to schedule patching for all the test instances on the first Saturday of a month and for all production systems on the third Saturday of the month, we’ll need two maintenance windows, one for test and one for production. We’ll start with the one for test:

    For the scheduling you should go for a cron expression:

    “6F” means the first Saturday (0 is Sunday) of each month. Note that if you skip the “Schedule timezone” you will need to provide UTC time in the cron expression. If you do specify your timezone, AWS automatically calculates the correct UTC time for you, as you can see once the maintenance window is created:
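
    For comparison, creating a similar maintenance window from the CLI might look like this (the name, timezone and the cron(...) form of the schedule are assumptions, not taken from the screenshots):

    aws ssm create-maintenance-window \
        --name "patch-test-0600" \
        --schedule "cron(0 6 ? * SAT#1 *)" \
        --schedule-timezone "Europe/Zurich" \
        --duration 4 \
        --cutoff 1 \
        --allow-unassociated-targets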

    Now we have the scheduling, but the maintenance window has no idea what to run and what to run it against. Where to run is specified by adding targets:

    You should definitely give a name to the target so you can easily identify the target afterwards:

    By specifying the “Patch Group” instance tag with our value for the test systems that should run in the first batch, we have all the Linux systems registered with this maintenance window. The second target will be the Windows systems that should be patched once the Linux systems have been done:

    All test systems registered:

    Now that we defined where to run we need to define what to run and this is specified by the “Run command”:

    In our case the document is “AWS-RunPatchBaseline” but it could be any of the documents listed as mentioned earlier:

    The task priority is 1 (because we want this to be executed first) and the target is “patch_test_0600_batch_1” (this is why the name for the target is important: you will probably not remember the unique string that was generated when you registered the target):

    The rate control options allow you to control concurrency and error thresholds; for the IAM role go with the defaults:

    We highly recommend storing the output in an S3 bucket, otherwise the output of the run command will be truncated after 2000 characters. The parameter should be “Install” (you could also go for “Scan” if you only want to scan for available patches):
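
    The same document can also be triggered ad hoc from the CLI outside of a maintenance window; a hedged sketch, where the bucket name is just an example:

    aws ssm send-command \
        --document-name "AWS-RunPatchBaseline" \
        --targets 'Key=tag:Patch Group,Values=patch_test_0600_batch_1' \
        --parameters 'Operation=Install' \
        --output-s3-bucket-name my-ssm-patch-logs \
        --max-concurrency "50%" \
        --max-errors "1"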

    Do exactly the same for the patch_test_0600_batch_2 group, with priority 2:

    Both run commands are now registered:

    And that’s it. Repeat the same for the production machines and all your instances are scheduled for patching. Once the maintenance window has executed you can get the logs in the “Run command” section:



    Here you have the link to the log in S3:

    Conclusion

    When you start using AWS SSM it is a bit hard to understand all the relations between patch baselines, the patch group tag, targets, run commands and maintenance windows. Once you get that, it is quite easy to schedule patching of your whole infrastructure (even on-prem), as long as you run operating systems AWS has patch baselines for. The logs should definitely go to S3 so you have the full output available.

    The post Avoiding patching madness by using AWS SSM appeared first on Blog dbi services.

    Clustered Indexes

    Tom Kyte - Fri, 2019-11-08 02:48
    Is a clustered index faster than a non-clustered index?
    Categories: DBA Blogs

    Platform list on which the JAVA_JIT_ENABLED parameter can be enabled

    Tom Kyte - Fri, 2019-11-08 02:48
    <b>Question:</b> As my application is using some JAVA function as UDF using <b>loadjava</b>. So, developers are suggesting to set <b>JAVA_JIT_ENABLED</b> parameter as <b>true</b>. I have read (https://docs.oracle.com/cd/B28359_01/server.111/b28320/in...
    Categories: DBA Blogs

    How can i extract data from a URL string in SQL?

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi TOM, i am trying to extract some data from a URL and running into some issues. The URL's will look something like this: http://192.168.201.242:8000/express_get?time=$startDate&from=$phoneNumber&to=$CAMPAIGN&Id_agent=$idExternalUser&spent=$durat...
    Categories: DBA Blogs

    JAVA CLASSPATH Specification

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi, We have an Oracle 10g database running java on the database as java stored procedures. It is hosted on an IBM AIX frame. There are both 64bit and 32bit jvms installed. How do you specify the java classpath for java running in the database? W...
    Categories: DBA Blogs

    Cursors

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi Tom, The Oracle users in the world should be thankful to you for all your help all the way .Many thanks for that. We are learning many things about Oracle that we could not learn by reading several Oracle books (other than your books) though ...
    Categories: DBA Blogs

    Alternate to pro*c batch programming

    Tom Kyte - Fri, 2019-11-08 02:48
    Dear Team, We would like to have suggestion on the topic related to alternate programming for the existing batch system in pro*c with oracle 12c database. Currently we have many number of batch jobs which are in pro*c (by interacting with oracl...
    Categories: DBA Blogs

    Create materialized view with ENABLE QUERY REWRITE for SDO_RELATE getting ORA-30373

    Tom Kyte - Fri, 2019-11-08 02:48
    I write a query like this: <code>CREATE MATERIALIZED VIEW MV REFRESH FORCE ON DEMAND start with (sysdate) next (sysdate+1/1440) ENABLE QUERY REWRITE AS SELECT O.ID DIREK_ID, MAX(LK.ADI) ISLETME_GERILIMI FROM xxx O, yyy AA, SY...
    Categories: DBA Blogs

    Copy row value down until next non-null row

    Tom Kyte - Fri, 2019-11-08 02:48
    Hi, Is there a simple way to achieve the following output in SQL? Wanted to populate the null values with the Country name until there is a new country name. Thanks with data as ( select 'Afghanistan Afghani' as country_currency, N...
    Categories: DBA Blogs

    Problem with very large varchar2 sorting in oracle 10g r2? Sorry, I can't verify my problem in LiveSQL because I need to run it in Oracle 10g r2

    Tom Kyte - Fri, 2019-11-08 02:48
    I am very sorry to waste your time with something that is probably VERY SIMPLE, but IT IS something I cannot understand ... I must be obfuscated <code>select level, lpad(to_char(level), 1000) from dual connect by level<=20 order by 2;</code> ...
    Categories: DBA Blogs

    Change column to row

    Tom Kyte - Fri, 2019-11-08 02:48
    I have a data like below in a table <code>create table test ( sr varchar2(1),col1 number,col2 number,col3 number ) ; insert into test values ('a',1,2,3); insert into test values ('b',4,5,6); insert into test values ('c',7,8,9);</code> Want...
    Categories: DBA Blogs

    Connection pooling with PgBouncer

    Yann Neuhaus - Thu, 2019-11-07 09:38

    Some of you may know the case: as soon as the number of users grows, the number of resource problems increases. Have you ever thought about using a connection pooler? Too complex, too much administration effort? In this post I would like to explain how a connection pooler can help you with your memory, as well as show you how simple it is to set up connection pooling with PgBouncer.

    Introduction

    By default PostgreSQL forks its main process into child processes for every new connection. Over time this can lead to more and more processes on the server. On one hand, this is pretty cool, because it provides stability and a nice view of resource utilization per connection. But if there are many short-lived connections, the disadvantages predominate: the more connections you have, the more RAM is used.
    The solution to that problem can be a connection pooler like PgBouncer, an open source connection pooling middleware especially designed for PostgreSQL. It acts like a wrapper around the database connections. It handles the connections between the database and the pool, while everything is hidden from the application that connects.
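
    To give an idea of what such a wrapper looks like in practice, here is a minimal pgbouncer.ini sketch (database name, paths and pool settings are assumptions, not taken from this setup):

    [databases]
    ; clients connect to "appdb" on port 6432, PgBouncer forwards to the real instance on 5432
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr       = 127.0.0.1
    listen_port       = 6432
    auth_type         = md5
    auth_file         = /etc/pgbouncer/userlist.txt
    pool_mode         = transaction
    max_client_conn   = 500
    default_pool_size = 20

    An application would then simply point its connection string at port 6432 (e.g. psql -h 127.0.0.1 -p 6432 appdb) and PgBouncer handles the pooling.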

    Installation

    For the installation of PgBouncer you can choose between two possibilities:
    1. using yum
    2. building from git (https://pgbouncer.github.io/install.html#building-from-git)

    To keep it simple, we go on with method 1.

    postgres@centos-mini:/u02/pgdata/11/PG1/ [PG1] sudo yum install pgbouncer
    Loaded plugins: fastestmirror
    Determining fastest mirrors
    epel/x86_64/metalink                                                                |  28 kB  00:00:00
     * base: pkg.adfinis-sygroup.ch
     * epel: pkg.adfinis-sygroup.ch
     * extras: pkg.adfinis-sygroup.ch
     * updates: pkg.adfinis-sygroup.ch
    base                                                                                | 3.6 kB  00:00:00
    epel                                                                                | 5.3 kB  00:00:00
    extras                                                                              | 2.9 kB  00:00:00
    ius                                                                                 | 1.3 kB  00:00:00
    pgdg10                                                                              | 3.6 kB  00:00:00
    pgdg11                                                                              | 3.6 kB  00:00:00
    updates                                                                             | 2.9 kB  00:00:00
    (1/10): base/7/x86_64/group_gz                                                      | 165 kB  00:00:06
    (2/10): extras/7/x86_64/primary_db                                                  | 153 kB  00:00:00
    (3/10): epel/x86_64/group_gz                                                        |  90 kB  00:00:06
    (4/10): epel/x86_64/primary_db                                                      | 6.9 MB  00:00:08
    (5/10): epel/x86_64/updateinfo                                                      | 1.0 MB  00:00:08
    (6/10): pgdg11/7/x86_64/primary_db                                                  | 337 kB  00:00:01
    (8/10): base/7/x86_64/primary_db                                                    | 6.0 MB  00:00:10
    (10/10): updates/7/x86_64/primary_db                                                | 2.8 MB  00:00:01
    (11/10): ius/x86_64/primary                                                         | 139 kB  00:00:06
    (12/10): pgdg10/7/x86_64/primary_db                                                 | 384 kB  00:00:06
    ius                                                                                              631/631
    Resolving Dependencies
    --> Running transaction check
    ---> Package pgbouncer.x86_64 0:1.12.0-1.rhel7 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ===========================================================================================================================================================================================================
     Package                           Arch                                               Version                                                     Repository                                          Size
    ===========================================================================================================================================================================================================
    Installing:
     pgbouncer                         x86_64                                             1.12.0-1.rhel7                                              pgdg10                                             207 k
    
    Transaction Summary
    ===========================================================================================================================================================================================================
    Install  1 Package
    
    Total download size: 207 k
    Installed size: 477 k
    Is this ok [y/d/N]: y
    Downloading packages:
    pgbouncer-1.12.0-1.rhel7.x86_64.rpm                                                                                                                                                | 207 kB  00:00:06
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : pgbouncer-1.12.0-1.rhel7.x86_64                                                                                                                                                     1/1
      Verifying  : pgbouncer-1.12.0-1.rhel7.x86_64                                                                                                                                                     1/1
    
    Installed:
      pgbouncer.x86_64 0:1.12.0-1.rhel7
    
    Complete!
    
    Configuration pgbouncer.ini

PgBouncer has one central configuration file called pgbouncer.ini. By default it is located under /etc/pgbouncer and it is used to configure the PgBouncer pool.
You can define a lot of parameters in this file, most of them commented out by default, and you can start with a very minimal configuration. It includes generic settings such as logfile, listen_addr and listen_port as well as connectivity, console access control, admin_users and many, many more. The full list of parameters can be found in the PgBouncer documentation, which is really detailed and gives you a good overview.
Let's have a look at a simple sample pgbouncer.ini file:

    cat /etc/pgbouncer/pgbouncer.ini
    [databases]
    bouncer= host=localhost dbname=bouncer
    
    [pgbouncer]
    listen_addr=127.0.0.1
    listen_port= 6432
    auth_type = md5
    auth_file= /etc/pgbouncer/userlist.txt
    admin_users=bounce
    pool_mode=session
    max_client_conn = 100
    default_pool_size = 20
    logfile = /etc/pgbouncer/log/pgbouncer.log
    pidfile = /etc/pgbouncer/pgbouncer.pid
    

The ini file has two sections. The [databases] section defines the alias(es) for the database(s); just start a new line for every database, and you can also define user, port etc. The [pgbouncer] section is where the pool configuration is done; here, too, a very minimal configuration is enough to get started.

One of the most important parameters is pool_mode, which defines how a server connection can be reused. Three modes can be defined (a short configuration sketch follows below):
session: This is the default value. The connection is released back to the pool after the client disconnects.
transaction: The connection is released back to the pool once a transaction finishes.
statement: The connection is released back to the pool after a query finishes.
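
As a small sketch (the alias, user and paths are simply the ones from the example above), a pgbouncer.ini using transaction pooling could look like this. Keep in mind that session-level features such as SET, prepared statements or advisory locks are not safe with transaction pooling:

[databases]
bouncer= host=localhost dbname=bouncer

[pgbouncer]
listen_addr=127.0.0.1
listen_port= 6432
auth_type = md5
auth_file= /etc/pgbouncer/userlist.txt
; the server connection is handed back to the pool at COMMIT/ROLLBACK
pool_mode=transaction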

The other parameters in the [pgbouncer] section briefly explained:
listen_addr: List of addresses on which to listen for TCP connections.
listen_port: The listening port.
admin_users: Users from the auth_file which get access to a special pgbouncer database. This database provides performance-related information about PgBouncer.
max_client_conn: The maximum number of client connections allowed. The default value is 100, but there is also a formula to calculate the value (see the sketch after this list).
default_pool_size: The number of server connections allowed per user/database pair. The default value is 20.
logfile: This one is self-explanatory: the log file location.
pidfile: The location of the PID file.
auth_type and auth_file: These two belong together. auth_type specifies how to authenticate users (pam|hba|md5) against PgBouncer and auth_file contains the usernames and passwords.
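
The formula mentioned above for max_client_conn refers to the file descriptor estimate from the PgBouncer documentation. As a rough sketch, using the numbers of the sample configuration (one database and one user are just assumptions here):

# theoretical maximum of file descriptors PgBouncer may need:
#   max_client_conn + (pool_size * number of databases * number of users)
# with the sample configuration: 100 + (20 * 1 * 1) = 120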

    userlist.txt

As already mentioned this file has a very simple structure: username and password. You can either write the password in plain text or the MD5 hash of the password. Of course it is not very secure to put the plain text password in here; a small sketch of how to build such an MD5 entry follows after the example.

    cat /etc/pgbouncer/userlist.txt
    "bounce" "md51db1c086e81505132d1834e06d07420e"
    "postgres" "md53175bce1d3201d16594cebf9d7eb3f9d"
    
    Start PgBouncer

Now all the configuration is done and PgBouncer can be started. It is possible to start PgBouncer on the command line and see the log output directly (a note on running it in the background follows after the output):

    postgres@centos-mini:/etc/pgbouncer/ [PG1] /bin/pgbouncer pgbouncer.ini
    2019-11-06 19:40:05.862 CET [13498] LOG kernel file descriptor limit: 1024 (hard: 4096); max_client_conn: 100, max expected fd use: 192
    2019-11-06 19:40:05.864 CET [13498] LOG listening on 127.0.0.1:16432
    2019-11-06 19:40:05.864 CET [13498] LOG listening on unix:/tmp/.s.PGSQL.16432
    2019-11-06 19:40:05.864 CET [13498] LOG process up: PgBouncer 1.12.0, libevent 2.0.21-stable (epoll), adns: libc-2.17, tls: OpenSSL 1.0.2k-fips  26 Jan 2017
    2019-11-06 19:41:05.868 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 0 B/s, xact 0 us, query 0 us, wait 0 us
    2019-11-06 19:41:39.325 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58648 login attempt: db=db1 user=user1 tls=no
    2019-11-06 19:41:39.326 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58648 closing because: client unexpected eof (age=0s)
    2019-11-06 19:41:47.577 CET [13498] LOG C-0x18da360: db1/user1@127.0.0.1:58652 login attempt: db=db1 user=user1 tls=no
    2019-11-06 19:41:47.579 CET [13498] LOG S-0x18e0c30: db1/user1@[::1]:5432 new connection to server (from [::1]:37654)
    2019-11-06 19:42:05.869 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 0 B/s, xact 0 us, query 0 us, wait 157 us
    2019-11-06 19:43:05.872 CET [13498] LOG stats: 0 xacts/s, 0 queries/s, in 0 B/s, out 2 B/s, xact 1522 us, query 1522 us, wait 0 us
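
When started like this, PgBouncer stays attached to the terminal. For a quick test without a service it can also be put into the background with the -d (daemonize) switch, for example:

postgres@centos-mini:/etc/pgbouncer/ [PG1] /bin/pgbouncer -d pgbouncer.ini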
    

Furthermore it is possible to create a systemd service so that PgBouncer starts automatically in the background after every reboot (note that the unit also has to be enabled, see below):

    cat /etc/systemd/system/pgbouncer.service
    [Unit]
    Description=A lightweight connection pooler for PostgreSQL
    After=syslog.target
    After=network.target
    
    [Service]
    Type=simple
    
    User=postgres
    Group=postgres
    
    # Path to the init file
    Environment=BOUNCERCONF=/etc/pgbouncer/pgbouncer.ini
    
    PIDFile=/var/run/pgbouncer/pgbouncer.pid
    
    # Where to send early-startup messages from the server
    # This is normally controlled by the global default set by systemd
    # StandardOutput=syslog
    
    ExecStart=/bin/pgbouncer ${BOUNCERCONF}
    ExecReload=/bin/kill -HUP $MAINPID
    KillSignal=SIGINT
    
    # Give a reasonable amount of time for the server to start up/shut down
    TimeoutSec=300
    
    [Install]
    WantedBy=multi-user.target
    postgres@centos-mini:/etc/ [PG1] sudo systemctl start pgbouncer
    postgres@centos-mini:/etc/ [PG1] sudo systemctl status pgbouncer
    ● pgbouncer.service - A lightweight connection pooler for PostgreSQL
       Loaded: loaded (/etc/systemd/system/pgbouncer.service; disabled; vendor preset: disabled)
       Active: active (running) since Thu 2019-11-07 15:17:09 CET; 4s ago
     Main PID: 17298 (pgbouncer)
       CGroup: /system.slice/pgbouncer.service
               └─17298 /bin/pgbouncer /etc/pgbouncer/pgbouncer.ini
    
    Nov 07 15:17:09 centos-mini systemd[1]: Started A lightweight connection pooler for PostgreSQL.
    Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG kernel file descriptor limit: 1024 (hard: 4096); max_client_conn: 100, max expected fd use: 172
    Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG listening on 127.0.0.1:6432
    Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG listening on unix:/tmp/.s.PGSQL.6432
    Nov 07 15:17:09 centos-mini pgbouncer[17298]: 2019-11-07 15:17:09.127 CET [17298] LOG process up: PgBouncer 1.12.0, libevent 2.0.21-stable (epoll), adns: libc-2.17, tls: OpenSSL 1.0.2k-fips  26 Jan 2017
    15:17:13 postgres@centos-mini:/etc/ [PG1]
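
Note that the status output above still reports the unit as “disabled”. To really have PgBouncer come up after every reboot, the unit has to be enabled as well:

postgres@centos-mini:/etc/ [PG1] sudo systemctl enable pgbouncer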
    

    PgBouncer is running now and you can connect to the database using PgBouncer.

    postgres@centos-mini:/etc/ [PG1] psql -U bounce -p 6432 -h localhost bouncer
    Password for user bounce:
    psql (11.4 dbi services build)
    Type "help" for help.
    bouncer=>
    bouncer=>
    
    Monitoring

I already mentioned the admin users before and want to explain them in a bit more detail now.
PgBouncer allows users with admin rights to connect to the virtual database “pgbouncer”. You can use this database to see who is connecting, how many active connections are in a pool and much more, and it also accepts administrative commands (sketched after the examples below). As a good starting point, use “SHOW HELP” as soon as you are connected.

    postgres@centos-mini:/etc/ [PG1] psql -U bounce -p 6432 -h localhost pgbouncer
    Password for user bounce:
    psql (11.4 dbi services build, server 1.12.0/bouncer)
    Type "help" for help.
    pgbouncer=# SHOW HELP;
    NOTICE:  Console usage
    DETAIL:
            SHOW HELP|CONFIG|DATABASES|POOLS|CLIENTS|SERVERS|USERS|VERSION
            SHOW FDS|SOCKETS|ACTIVE_SOCKETS|LISTS|MEM
            SHOW DNS_HOSTS|DNS_ZONES
            SHOW STATS|STATS_TOTALS|STATS_AVERAGES|TOTALS
            SET key = arg
            RELOAD
        PAUSE [<db>]
        RESUME [<db>]
        DISABLE <db>
        ENABLE <db>
        RECONNECT [<db>]
        KILL <db>
            SUSPEND
            SHUTDOWN
    SHOW
    pgbouncer=#
    pgbouncer=# SHOW POOLS;
     database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode
    -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
     bouncer   | bounce    |         2 |          0 |         0 |       0 |       1 |         0 |        0 |       0 |          0 | session
     pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
    (2 rows)
    
    pgbouncer=# SHOW CLIENTS;
     type |  user  | database  | state  |   addr    | port  | local_addr | local_port |      connect_time       |      request_time       | wait | wait_us | close_needed |    ptr    | link | remote_pid | tls
    ------+--------+-----------+--------+-----------+-------+------------+------------+-------------------------+-------------------------+------+---------+--------------+-----------+------+------------+-----
     C    | bounce | bouncer   | active | 127.0.0.1 | 40322 | 127.0.0.1  |       6432 | 2019-11-07 15:24:40 CET | 2019-11-07 15:24:40 CET |    0 |       0 |            0 | 0x1bd9598 |      |          0 |
     C    | bounce | bouncer   | active | 127.0.0.1 | 40332 | 127.0.0.1  |       6432 | 2019-11-07 15:25:12 CET | 2019-11-07 15:25:12 CET |    0 |       0 |            0 | 0x1bd97b0 |      |          0 |
     C    | bounce | pgbouncer | active | 127.0.0.1 | 40314 | 127.0.0.1  |       6432 | 2019-11-07 15:21:50 CET | 2019-11-07 15:25:36 CET |  221 |  440169 |            0 | 0x1bd9380 |      |          0 |
    (3 rows)
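
Besides the SHOW commands, the console also accepts the administrative commands listed in SHOW HELP. As a short sketch: RELOAD re-reads pgbouncer.ini without a restart, PAUSE waits for the clients of a database to finish their work and then disconnects its server connections, and RESUME puts the database back into normal operation:

pgbouncer=# RELOAD;
RELOAD
pgbouncer=# PAUSE bouncer;
PAUSE
pgbouncer=# RESUME bouncer;
RESUME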
    
    Conclusion

The above configuration is only a simple example of how the configuration can look. Of course there are many more specifics you can define. PgBouncer is a great tool for connection pooling and can help you to scale down the memory usage of your server. The connections of a pool are always available and, in contrast to forking a new process, PgBouncer does not need to reach out to the database to establish a new connection every time. The connections are just there.

    Cet article Connection pooling with PgBouncer est apparu en premier sur Blog dbi services.

    Brazilian Luxury Hotel Uses Oracle to Raise the Bar on Guest Experiences

    Oracle Press Releases - Thu, 2019-11-07 07:00
    Press Release
    Brazilian Luxury Hotel Uses Oracle to Raise the Bar on Guest Experiences B Hotel Brasília personalizes guest services and reporting with Oracle Hospitality solutions

    Redwood Shores, Calif.—Nov 7, 2019

    B Hotel Brasília, an independent luxury hotel located on Brasília’s spectacular Eixo Monumental stretch, has deployed an integrated suite of Oracle Hospitality and Oracle Food and Beverage cloud solutions to modernize its operations across the property and offer guests a stellar stay. 

    “Our search for technology partners naturally led us to Oracle because of its industry recognition and the solution’s ability to meet our requirements,” said Ana Paula Ernesto, CEO, B Hotel Brasília. “Transitioning to cloud technology enables our staff to focus less on managing technology and more on curating the luxury experiences our guests demand. With a single view of operations across our hotel and dining facilities we can ensure guests are recognized across touchpoints while providing management with visibility into property performance.”  

    Oracle Hospitality OPERA Property provides B Hotel Brasília, which offers 306 rooms and a rooftop bar and pool overlooking Brasília’s skyline, with a single view of every guest touchpoint from reservations and check-in to dining. The technology arms hotel personnel with detailed guest information allowing them to deliver more personalized services. In addition, OPERA enables hotel management to prepare and deliver a variety of reports for management including arrivals, departures, revenue and ratings – all from an easy-to-use dashboard view.

    The Oracle Food and Beverage MICROS Simphony Cloud Point of Sale system for restaurants was implemented across the hotel’s dining venues including B Restaurant, Bar 16 and the Lobby Bar, offering wait staff the ability to take orders from mobile devices and deliver faster and more accurate food and beverage service to diners. 

    “With our Oracle hospitality and food and beverage cloud solutions, we are uniquely positioned to serve as a strategic partner to both independent hotels and global chains that want to increase guest satisfaction, streamline operations and increase on-property revenues,” said Tanya Pratt, Associate Vice President, Oracle Hospitality. “With B Hotel Brasília’s digital transformation they have the power to create a competitive advantage with stellar service that can increase the property’s recognition globally.”

    Watch the B Hotel Brasília video to learn how Oracle Hospitality Solutions are helping to drive the property’s growth.


    Learn more about Oracle’s Hospitality Solutions here.

    Contact Info
    Christine Allen
    Oracle
    +1.603.743.4534
    christine.allen@oracle.com
    About Oracle Hospitality

    Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. 

    For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality 

    About Oracle

    The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

    Trademarks

    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Talk to a Press Contact

    Christine Allen

    • +1.603.743.4534
