Setting up Cold Failover (Oracle/GI 19, Oracle Linux 7)
Setting up Cold Failover [message #678066] Tue, 05 November 2019 22:08
chfloreck
Messages: 3
Registered: November 2019
Junior Member
I would like to set up a cold failover system using GI 19.
I've done this with releases 12 and 18 with no problem.
These were the steps:
1) Install Grid Infrastructure 19 for Oracle Restart
2) Install Oracle Database 19 for Oracle Restart (software only)
3) Set up a database as an Oracle Restart database
4) Remove the database from the cluster
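For context, steps 3 and 4 map to commands roughly like these (DEMO1 is the database name used later in the thread; everything else is illustrative):

```shell
# Step 3: a database (here DEMO1, as named later in the thread) created
# with DBCA under Oracle Restart is automatically registered with GI:
srvctl status database -d DEMO1

# Step 4: remove it from GI/Oracle Restart management; this only
# deregisters the database, it does not delete any datafiles:
srvctl remove database -d DEMO1
```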

Then I created a resource script (see below).
When I try to add the resource, I run into this error:
crsctl add resource FAILODB.db -type cluster_resource -file resource.txt
CRS-34921: Resource 'FAILODB.db' cannot have a dependency on resource 'ora.FAILOVER_DATA.dg' because they are not members of the same resource group.
CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'FAILODB.db'

The problem is caused by the disk groups. If I remove them, everything runs fine.
Any ideas?

Thx
Christian


TYPE=cluster_resource
ACL=owner:oracle:rwx,pgrp:oinstall:r--,other::r--,group:dba:r-x,group:oper:r-x,user:oracle:r-x
ACTIONS=startoption,group:"oinstall",user:"oracle",group:"dba",group:"oper"
ACTION_SCRIPT=/u01/app/grid/grid19.3/crs/public/act_db_FAILODB.pl
ACTION_TIMEOUT=600
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Database resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
HOSTING_MEMBERS=raca1 raca2
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.RACA_DATA.dg,ora.RACA_FAST.dg,ora.RACA_MIRROR.dg) pullup(global:ora.RACA_DATA.dg,ora.RACA_FAST.dg,ora.RACA_MIRROR.dg) weak(type:ora.listener.type, uniform:ora.ons)
START_TIMEOUT=600
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.asm, shutdown:ora.RACA_DATA.dg, shutdown:ora.RACA_FAST.dg,shutdown:ora.RACA_MIRROR.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h
USER_WORKLOAD=yes
USE_STICKINESS=0
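For reference, once the resource registers (which here only succeeds after removing the disk-group dependencies), it would typically be started and checked like this; resource, file, and node names are the ones used in this thread:

```shell
# Register the custom resource from the attribute file shown above,
# then bring it online on a chosen node and inspect its full state:
crsctl add resource FAILODB.db -type cluster_resource -file resource.txt
crsctl start resource FAILODB.db -n raca1
crsctl status resource FAILODB.db -f
```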

Re: Setting up Cold Failover [message #678067 is a reply to message #678066] Wed, 06 November 2019 01:26
John Watson
Messages: 8074
Registered: January 2010
Location: Global Village
Senior Member
Welcome to the forum. Please read the OraFAQ Forum Guide and How to use code tags and make your code easier to read

Some questions:
What do you mean by "coldfailover"?
Why have you not created a database?
What do you mean by "setup an database, as restart"?
Why have you removed this database?
Why are you then registering the database with crsctl rather than srvctl?

It is all a bit confusing at the moment.

Please post the result of
crsctl status resource -t
crsctl status resource -t -init
Re: Setting up Cold Failover [message #678071 is a reply to message #678066] Wed, 06 November 2019 05:54
chfloreck
Messages: 3
Registered: November 2019
Junior Member
Hi John

We've got Grid Infrastructure installed on two servers.
The database itself is installed as a single instance (as I do not want to use RAC or RAC One Node).

Cold failover:
In case of a switchover, the instance is shut down by a Perl script (which issues a "shutdown immediate").
This is triggered either by shutting down the server on which the instance is active, or manually by the admin.
In case of a server crash, the database is started on the other server.
This is handled by the clusterware.
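The behaviour described above is what the action script referenced in the resource file implements. A minimal sketch in shell (the original uses Perl; SID, paths, and the exact logic here are assumptions, not the poster's script) — Clusterware calls the script with start/stop/check/clean as its first argument, and exit code 0 means success:

```shell
#!/bin/sh
# Hypothetical CRS action script for the failover database.
# ORACLE_SID and ORACLE_HOME are illustrative values.
export ORACLE_SID=FAILODB
export ORACLE_HOME=/u01/app/oracle/product/19.3/dbhome_1

case "$1" in
  start)
    "$ORACLE_HOME/bin/sqlplus" -s "/ as sysdba" <<EOF
startup
EOF
    ;;
  stop)
    # The "shutdown immediate" mentioned in the post:
    "$ORACLE_HOME/bin/sqlplus" -s "/ as sysdba" <<EOF
shutdown immediate
EOF
    ;;
  check)
    # Report ONLINE only if the pmon background process is present:
    pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null || exit 1
    ;;
  clean)
    # Forcible cleanup after a failure:
    pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null && \
      "$ORACLE_HOME/bin/sqlplus" -s "/ as sysdba" <<EOF
shutdown abort
EOF
    ;;
esac
exit 0
```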

To make this possible, I need to manually create a "resource", as I must not let GI handle the database;
doing that would require the RAC/RAC One Node options of the database, which must be paid for.

To create this resource, I create a database, selecting "Oracle Restart" (no extra costs).
At this point, GI will only handle the database on one server.
To make it possible to run the database on both servers, I must do these steps:
I remove the database from the management of GI; note that I do not delete the database (srvctl remove database -d DEMO1).
In the next step, I create a new resource of type cluster_resource.
This resource can run on both nodes of the GI, as defined in HOSTING_MEMBERS.

The idea behind this:
There is no need for the RAC option; even better, the database could be Standard Edition 2 (no need for Enterprise Edition).
That saves you lots of money.

This easy approach works fine in release 18.
As I read in another blog, START_DEPENDENCIES on disk groups is not possible in 19 anymore.
Strange thing.




[Updated on: Wed, 06 November 2019 05:55]


Re: Setting up Cold Failover [message #678072 is a reply to message #678071] Wed, 06 November 2019 06:10
John Watson
Messages: 8074
Registered: January 2010
Location: Global Village
Senior Member
I think you are making a complicated solution to a simple problem. The functionality you need is built into GI, you do it with server pools and policy management. Get rid of everything you have done since installing GI and then:

Create a server pool with both nodes, limited to one active member.
Create your single instance SE database as policy managed, and assign it to the pool (DBCA can do that for you).
That's all.
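A hedged sketch of those two steps with srvctl — the pool name failpool is made up, the node names are from the thread:

```shell
# A pool spanning both nodes, sized so that only one server is
# ever active in it at a time:
srvctl add srvpool -serverpool failpool -min 1 -max 1 -servers "raca1,raca2"

# Verify which server the pool currently owns:
srvctl status srvpool -serverpool failpool -detail
```

DBCA can then be pointed at this pool when creating the policy-managed database, as noted above.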

Failover is automatic, and legal because there is only ever an instance running on one node at a time. RAC One Node uses the same technology but does it with zero downtime because during the transition there is an instance running on each node, which is why you have to pay extra.
Re: Setting up Cold Failover [message #678073 is a reply to message #678072] Wed, 06 November 2019 14:50
chfloreck
Messages: 3
Registered: November 2019
Junior Member
Server pools seem to be the perfect approach.
But we went this way and ran into a nasty problem.
The first database can be installed fine, that's right.

However, if you add further instances...
Let's say we've got two nodes, RACA1 and RACA2. These are put into a server pool.
The first instance will run on RACA1.
Now we create the next database, which grabs RACA2.
The third database finds no "free" servers in the pool.
To get around this, you have to move all databases to node RACA1.
This causes a downtime, which is a no-go for our customers.

Perhaps this is handled another way in 19; I'll check that out.
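If I understand the pool mechanics correctly, the "move all databases to RACA1" step above would amount to relocating a server between pools — a sketch from memory, with the exact syntax unverified and the pool name assumed:

```shell
# Move raca2 out of its pool into the built-in Free pool; the
# databases it hosted stop and restart on raca1 (this is the
# downtime mentioned above):
crsctl relocate server raca2 -s Free
```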


Regards
Christian

[Updated on: Wed, 06 November 2019 14:52]


Re: Setting up Cold Failover [message #678076 is a reply to message #678073] Fri, 08 November 2019 00:55
John Watson
Messages: 8074
Registered: January 2010
Location: Global Village
Senior Member
I didn't realize you had several databases here. Quote:
Let's say we've got two nodes, RACA1 and RACA2. These are put into an serverpool.
The first instance will run on RACA1.
Now we create the next database, which grabs RACA2.
The third database finds no "free" Servers in the pool.
This is not right. Only one server will be active in the pool at any time, so if you assign several databases to the same pool they will all run on the same node.

Perhaps you have not set up your pools in the best way. One server can be in many pools, one database can be assigned to many pools, one pool can contain many databases. (It is all many-to-many relationships, which any relational engineer (like me) finds awkward). There are also sub-pools to consider: pools of pools, or parent and child pools. That might be the way to go, though I don't think there is much in the way of documentation and srvctl cannot handle them.
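Since srvctl cannot manage child pools, a sketch of the crsctl route mentioned above (pool names and sizes are made up for illustration):

```shell
# A parent pool over the cluster, and a child pool restricted to a
# single active server; the relationship is expressed via PARENT_POOLS:
crsctl add serverpool parentpool -attr "MIN_SIZE=1,MAX_SIZE=2"
crsctl add serverpool childpool -attr "PARENT_POOLS=parentpool,MIN_SIZE=0,MAX_SIZE=1"

# Show all pool attributes, including parent/child relationships:
crsctl status serverpool -p
```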