Oracle Cloud PaaS Machine – Drives Missing – Backup Failing


At some point the PaaS machine was rebooted and the drives for the middleware instance did not remount, which caused the following error when the backups ran:

On further investigation, once connected to the server, you will notice the drives have disappeared from the machine:

You will also notice errors in /var/log/messages.

Log onto the machine as root, run lsblk, and take note of the vg_binaries logical volumes with no mount point:

# lsblk
xvdb 202:16 0 21G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
├─xvdb2 202:18 0 4G 0 part [SWAP]
└─xvdb3 202:19 0 16G 0 part /
xvdc 202:32 0 14G 0 disk
├─vg_binaries-lv_tools (dm-1) 252:1 0 10G 0 lvm /u01/app/oracle/tools
├─vg_binaries-lv_mw (dm-2) 252:2 0 2G 0 lvm
└─vg_binaries-lv_jdk (dm-3) 252:3 0 2G 0 lvm
xvdd 202:48 0 6G 0 disk
└─vg_domains-lv_domains (dm-0) 252:0 0 6G 0 lvm
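As a quick cross-check before mounting, the logical volumes with no mount point can be pulled straight out of lsblk. A sketch using lsblk's raw output format; the helper name is my own:

```shell
# Sketch: list LVM logical volumes that currently have no mount point.
# Expects `lsblk -rno NAME,TYPE,MOUNTPOINT` (raw output, one device per
# line) on stdin; when a volume is unmounted the third column is empty.
unmounted_lvs() {
  awk '$2 == "lvm" && NF == 2 { print $1 }'
}

# Usage:
#   lsblk -rno NAME,TYPE,MOUNTPOINT | unmounted_lvs
```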

Once the mount points have been established, mount them manually:

# mount /dev/mapper/vg_domains-lv_domains /u01/data/otd-instance 
# mount /dev/mapper/vg_binaries-lv_mw /u01/app/oracle/middleware
# mount /dev/mapper/vg_binaries-lv_jdk /u01/jdk

Start any services you had running on the machine.
For now, if the machine reboots we have to mount the drives manually, until the RCA has been produced.
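Until the RCA explains why the volumes stopped mounting at boot, a possible workaround is to make the mounts persistent in /etc/fstab. A sketch using the device mapper names and mount points from the commands above; the ext4 filesystem type and options are assumptions, so verify them against the original build first:

```
/dev/mapper/vg_domains-lv_domains  /u01/data/otd-instance      ext4  defaults  0 0
/dev/mapper/vg_binaries-lv_mw      /u01/app/oracle/middleware  ext4  defaults  0 0
/dev/mapper/vg_binaries-lv_jdk     /u01/jdk                    ext4  defaults  0 0
```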
If you do wish to start OTD manually as the oracle user:

$ export JAVA_HOME=/u01/jdk
$ /u01/data/otd-instance/admin-server/bin/startserv

Oracle Cloud OPC user – ‘Server refused our key’


So we wake up one day and we cannot connect to our cloud machine as the opc user. What do you do? We try to connect via SSH and we get ‘Server refused our key’.

First, let’s check by drilling into the service and looking at the SSH access: does the key still exist?

We can see the key exists:


What we did was re-add the key below the original key. You could do the same thing with a different key and have two keys for the opc user; you may have to do this if Oracle Support ask you to give them opc access for an SR. Once Oracle Support have finished, you just remove their key.
So, as per the previous screen, you just repeat the same key and click ‘Add New Key’.

You should now be able to SSH to the box again, but the interesting find is that if you look at the authorized keys you see the following:

#from lockup

Still waiting for Oracle to tell me what happened, and what is ‘#from lockup’? So far neither support nor development have an answer.
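One sanity check that helps with ‘Server refused our key’ is comparing fingerprints on both sides: the key your client offers versus the keys the server actually holds. A sketch (the helper name is mine; the paths are the usual defaults):

```shell
# Sketch: print the fingerprint of every key in an authorized_keys file,
# skipping blank lines and comments such as '#from lockup'
fingerprints() {
  while read -r key; do
    case "$key" in ''|\#*) continue ;; esac
    printf '%s\n' "$key" | ssh-keygen -lf /dev/stdin
  done < "$1"
}

# Client side: fingerprint of the key we are offering
#   ssh-keygen -lf ~/.ssh/id_rsa.pub
# Server side (e.g. via console access):
#   fingerprints /home/opc/.ssh/authorized_keys
```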


Oracle DBCS (Cloud) Scale Up Storage Steps

We have been alerted that our tablespace in DBCS is reaching its limit.

Looking at our filesystem, we are using /u02 to store our datafiles and we have used 92% of it.

Before we attempt to extend this storage, we must stop any cloud services and applications that may be using this database. Once this is done, we log into our Oracle Cloud Service portal and drill into the Oracle Database Cloud Service we wish to add additional storage to.
Once we are in the DBCS pane, as per below, we select the hamburger menu and choose the Scale Up/Down option.

On the next screen we have selected to extend the data storage volume by 100 GB. Then select ‘Yes’.

Once selected, we will see the notification of the ‘Scale Up – in progress’.

Once finished, if we log onto the machine and check the file system, we can see /u02 has been extended.

We can now log onto the database and add the additional data file.
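The ‘add a data file’ step might look like the following. A sketch only: the tablespace name, file path and sizes here are illustrative, so match them to your own layout under /u02:

```sql
-- Illustrative names and sizes, adjust to your environment
ALTER TABLESPACE users
  ADD DATAFILE '/u02/app/oracle/oradata/ORCL/users02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;
```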

Looking at the table space, we can see the space has been extended.

Restart any cloud services that were shut down for the scale up.
Happy days 🙂




FAL[client, ARC0]: Error 12154 for fetching gap sequence

Recently I came across the following error :
ERROR: failed to establish dependency between database ps1 and diskgroup resource ora.FRA.dg
Error 12154 received logging on to the standby
FAL[client, ARC0]: Error 12154 connecting to s1 for fetching gap sequence
Error 12154 received logging on to the standby
FAL[client, ARC0]: Error 12154 connecting to s2 for fetching gap sequence
Let’s have a look to see what has happened and resolve it.
1) If we log onto the standby and run the following, we can see the gap:

SQL> select *
  2  from v$archive_gap;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1         74298          74419
         2         67767          67844

1b) As a different example, you could also log onto the ASM standby machine as oracle, look at the archive logs, and see if any are missing.

cd FRA

If we look at the list we can identify the missing log. The names are in sequential order: ls78live_1 is thread 1, ls78live_2 is thread 2. We can see below that ls78live_1_144812_704059285.arc is missing.
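Eyeballing a long listing for one missing file is error-prone; a short sketch like this can spot the gaps instead (assuming the naming pattern above, where the third underscore-separated field is the log sequence number; the helper name is mine):

```shell
# Sketch: report gaps in archive-log sequence numbers for one thread.
# Input: file names like ls78live_1_144812_704059285.arc, one per line.
find_gaps() {
  awk -F_ '{ print $3 }' | sort -n | awk '
    NR > 1 && $1 > prev + 1 {
      for (s = prev + 1; s < $1; s++) print "missing sequence " s
    }
    { prev = $1 }'
}

# Usage (run per thread):
#   ls ls78live_1_*.arc | find_gaps
```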


2) On the primary database, retrieve the missing archive log files based on the information we got from v$archive_gap.

rman target sys/** catalog rcuser/**@rmcatalog
RMAN> run
{allocate channel c1 type disk;
backup archivelog from sequence 74298 until sequence 74299 thread 1 format '/database/backup/s78live/gap/arch_1_%U';
backup archivelog from sequence 67767 until sequence 67768 thread 2 format '/database/backup/s78live/gap/arch_2_%U';
}

If the archive logs have already been removed, you will get the following error:

Starting backup at 22-JUL-16
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 22-JUL-16.

In this case you will need to look through your backed-up archive logs.
3) On the standby, let’s restore the archive files:

RMAN> run
{catalog start with '/database/backup/s82live/arch/1dayold';
allocate channel c1 type disk;
allocate channel c2 type disk;
restore archivelog from sequence 74298 until sequence 74299 thread 1;
restore archivelog from sequence 67767 until sequence 67768 thread 2;
}

4) We now re-enable recovery of the standby.

$ sqlplus / as sysdba
SQL> alter database recover managed standby database using current logfile disconnect from session;

We have identified the gap, found the missing archive logs, copied them to the standby, and re-enabled Data Guard.
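To confirm the apply process really is running again after re-enabling recovery, a quick check on the standby (a sketch using the v$managed_standby view):

```sql
-- Run on the standby: the MRP process should report a status such as
-- APPLYING_LOG or WAIT_FOR_LOG once recovery is re-enabled
select process, status, sequence#
from   v$managed_standby
where  process like 'MRP%';
```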


OTN Appreciation Day: ASM

First things first: thanks, Tim Hall, for this great idea.
I have to say one of my favorite features has to be Oracle ASM.
Oracle ASM is the storage/file system where you store your data files and other database files. One of the great advantages is that you can add or remove disks without any downtime! To give you an example, a few months back I had to change the disks our ASM was using to faster ones; we were able to add the new disks to ASM and remove the old disks without any system outage.
A blog post I had written earlier shows how this works – Adding New ASM Disks.
If you don’t use ASM, I would be asking why!


POUG – 1st Polish International Oracle Conference

Amazing, awesome: some of the words that come to my mind. I wrote this on the plane but I am only just posting it.

I have just landed back in the UK and I am still thinking about this conference. My journey began on Thursday: I was flying into Warsaw and my plane was delayed, and I really did not want to miss the speakers’ dinner, a great opportunity to meet everyone. I managed to arrive in sufficient time, checked in at the hotel and went to meet the rest of the group at the bar.

At the bar were Martin, Neil and Jim; these guys rock ;). We had a quick drink, then met the rest of the group at reception.

It was great to meet all the speakers. I have to say we were all well looked after by the POUG organizers: we were taken to a restaurant for an amazing meal, and thereafter a little stroll/sightseeing for the few who had lost all direction and sense; we ended up at an Irish bar for a nightcap.

(Day 1) There was a really good turnout, easily 150+ I would say, and some had travelled from the borders of Poland. There were two tracks running. I went to Jim’s talk, ‘DBA, Heal Thyself: Five Diseases of IT Organizations’. It was my first time at one of Jim’s talks and it was superbly delivered; after I left I decided to polish up my slides for later. I missed Heli Helskyaho’s @helifromfinland talk about SQL Developer that I really wanted to go to; I heard it was really good. I will catch it next time.

After lunch I went to Pieter’s @vanpupi and Philippe’s @pfierens talk about Oracle VM on Exadata. This was good as well; I learnt some things that you will not find in any documents. Straight after this talk was my presentation. I was told there would be an HDMI connection, but there was only a VGA cable (panic, panic, panic); luckily Philippe lent me his adapter as well as a PowerPoint pointer. Thank you, Philippe.

My talk was about extraction of cloud data to an on-premise BI system, and monitoring. The room was pretty full, which was a great sign. I have never really been too nervous before, but I was this time. As the talk progressed we moved on to cloud security and monitoring in the cloud, as there was great interest in this area. After my talk had finished we still had around five minutes, so we discussed the cloud aspects in more detail. I know there were some more questions on the monitoring side; if anybody wants any info, do get in contact with me. This is another great side of user groups: you can go to one talk and ask questions about other topics that come to mind.

Thereafter I sat in on Robin’s @rmoff talk about OBIEE 12c performance; I learnt some new things that will be great for me at work. I have to say you can never know everything; there is always something to learn.

After this talk we were whisked to the POUG Appreciation event as I would call it. They really looked after the speakers and attendees.

(Day 2) It was a long night, and it was amazing to see so many people the next day for Neil’s @chandlerDBA talk on ‘Why Has My Plan Changed?’. Being a man with many hats at work and a DBA for over 17 years, I enjoyed Neil’s talk, as I am always fighting SQL performance, and I look forward to seeing his scripts. I would definitely recommend it: if you are at a different conference and he is talking, don’t miss it 🙂.

After Neil’s talk I decided to stay in this stream for Joze’s talk, ‘Opening the Black Box Called “Cost Based Optimizer”’; he’s right, it is a black box. Thereafter I moved to Martin’s @MDWidlake talk, ‘Tips on Bulk Data Processing with SQL and PL/SQL’. It was a great insight into bulk data processing, and this man likes his beer 😉

Once we had consumed our lunch we had our final session of the day, ‘#DBADEV: Bridging the Gap Between Development and Operations’. This was a panel session with Sabine @oraesque, Martin @MDWidlake, Philippe @pfierens, Neil @ChandlerDBA, Piet @pdevisser and Erik @evrocs_nl. I think most people would have something to say in this area; I am always fighting with developers 🙂. What a great way to end the conference.

All my breakfasts and lunches were timed with Pieter’s; absolute legend 🙂. At the last breakfast Pieter, Erik and I were talking about how good the conference was and how they have set the bar really high; it is very high! It can only get better ;). I can see this conference growing huge; from talking to some of the guests, they would like talks on EM monitoring, middleware and APEX. Maybe next year have a few of these 😉

This was a super conference: the venue, food, equipment and the beer, all excellently organized and delivered. It was great to meet all the speakers, organizers and guests. POUG is definitely on the map now. It was bloody marvelous 😉





EM13c Exalytics TimesTen Plugin

In my previous post we discovered the Exalytics machine. In this post, we will install the EM13c TimesTen plugin to monitor TimesTen on Linux 5.

  1. Create a user on the exalytics machine to be used for TimesTen.

On the Exalytics host, in TimesTen:

Command> create user emservice identified by emmonitor;

User created.

Command> grant admin to emservice;


  2. Get the plugin and deploy it.

In EM13C Console: Setup/Extensibility/Self Update

Drill into Plugins

Select TimesTen, then Download.

Once downloaded, select Setup/Extensibility/Plugins

Select TimesTen and click Deploy On / Management Servers.


Select Next

Select Next

Select Next

Select Deploy

*This will shut down and restart the OMS server during the deployment.

Once restarted, go back to Plug-ins and check it has been deployed successfully to the management server.


Select the TimesTen row, then click Deploy On, Management Agent.


In the Deploy Agent screen, drop down the version and select it, then click Continue.

Select the Exalytics server. Click Continue.

Select next



Setup/Extensibility/Self Update/Add Target Declaratively

Select Host/Target Type as per below.

Click Add

Test Connection

Click Ok


Click Add Target Declaratively again. Select the host and, this time, TimesTen Instance.

Test Connection, then OK.


We are now monitoring TimesTen.







Installing SOACS Hybrid Agent EM13C

In my last post we set up a hybrid agent and installed an OTD agent; in this post we shall create a SOACS agent. I am assuming you have already done the first few steps mentioned in my last post, OTD Hybrid Agent.

  1. On the SOACS Target Host

Insert the OMS server details into the hosts file (/etc/hosts):

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Added for EM  ******

Create the Agent Directory

#mkdir /u01/app/agentHome
#chown oracle:oracle /u01/app/agentHome


  2. Add the target in EM
    EM Console > Setup > Add Targets Manually > Install Agent on Host

    Click Next


    Deploy Agent


    We will run the scripts manually, so click Continue All Hosts.
    Log onto the host and run the following as root:

    • /u01/app/agentHome/agent_13.
    • /u01/app/oraInventory/


    Click Done in your agent install screen.

    If you go to Targets/Hosts, you will see the SOACS machine.

    Discovering SOA Suite

    This section describes the procedure for discovering the SOA Suite.

    1. Log in to Oracle Enterprise Manager Grid Control.
    2. Click Targets and then Middleware.

      Oracle Enterprise Manager Grid Control displays the Middleware page that lists all the middleware targets being monitored.

    3. In the Middleware page, from the Add list, select Oracle Fusion Middleware / WebLogic Domain and click Go. Specify the Administration Server Host, Port, User Name, Password, Agent (local or remote) and the JMX Protocol and click Continue.

    Click Add Targets

    4. You will return to the Middleware page. You will see the SOA instances under the WebLogic Domain.


    Wait a few minutes, then drill into the SOA target; you will need to configure the functionality logins.


    Bugs and Fixes


    1. The Error Hospital screen will not work by default.


    When you go to this screen it will ask you to set up the target information; when you try to do this you will get the following message.


     According to the Oracle certifications, it’s not supported with 12.1.

     bash-4.1$ pwd
    -bash-4.1$ mkdir extjlib

     # According to the error, we need all these jar files:

     1. soa-infra-mgmt.jar
     2. oracle-soa-client-api.jar
     3. jrf-api.jar
     4. tracking-api.jar
     5. oracle.bpm.bpmn-em-tools.jar
     6. rulesdk2.jar
     7. xmlparserv2.jar
     8. wlthint3client.jar
     Copy all the jar files.

     -bash-4.1$ cp -p /u01/app/oracle/middleware/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/soa/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/oracle_common/modules/oracle.jrf_12.1.3/jrf-api.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-api.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/soa/bpm/modules/oracle.bpm.mgmt_11.1.1/oracle.bpm.bpmn-em-tools.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/soa/soa/modules/oracle.rules_11.1.1/rulesdk2.jar /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/agentHome/agent_13. /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/oracle/middleware/oracle_common/modules/ /u01/app/agentHome/agent_13.
    -bash-4.1$ cp -p /u01/app/agentHome/agent_13. /u01/app/agentHome/agent_13.
    -bash-4.1$ ls -la
    total 74412
    drwxrwxr-x 2 oracle oracle     4096 Jul 29 19:09 .
    drwxr-xr-x 3 oracle oracle     4096 Jul 29 18:59 ..
    -rw-r----- 1 oracle oracle   166207 May  6  2014
    -rw-r----- 1 oracle oracle   147471 May  7  2014 jrf-api.jar
    -rw-r----- 1 oracle oracle 65873876 Jan 13  2016 oracle.bpm.bpmn-em-tools.jar
    -rw-r----- 1 oracle oracle    49893 Jan 13  2016 oracle-soa-client-api.jar
    -rw-r----- 1 oracle oracle  2063990 Jan 13  2016 rulesdk2.jar
    -rw-r----- 1 oracle oracle  1556107 Jan 13  2016 soa-infra-mgmt.jar
    -rw-r----- 1 oracle oracle    16081 Jan 13  2016 tracking-api.jar
    -rw-r--r-- 1 oracle oracle  4615123 Jul 20  2015 wlthint3client.jar
    -rw-r--r-- 1 oracle oracle  1684004 Oct  5  2015 xmlparserv2.jar
     -bash-4.1$ chmod 755 *.jar

     Now go back to your screen and just do a Save, not a Rescan. If you do a rescan it will still report an error with the jar files, but it will say the jar files do exist. Save.
     The screen now works.

    2. SOA composite statuses: if composites go down they are marked as down, but when they come back up the console does not refresh. You will need to do a WebLogic domain refresh in the console; you could set up a job that does this refresh automatically.



     You now have SOACS monitored from EM13c Cloud Control.


Oracle Traffic Director (OTD) Hybrid Agent EM13C

In this post we will set up monitoring for OTD in the cloud.


When you use the Add Host Targets Wizard or EM CLI to install a Management Agent on a host running Microsoft Windows, as a prerequisite you must install Cygwin and start the SSH daemon on the host. To do so, follow the steps listed in Sections 5.3 and 5.4.


  1. Generate Public and Private Keys on a Linux box.

#ssh-keygen -b 2048 -t rsa

This will generate two keys: a private key and a public key.
Add the public key to the oracle user (the user being used for connecting to the host from EM13c) on your target OTD host.

  2. Set up the user in EM13c


Click on Setup, Security, and then Named Credentials. Click Create under the Named Credentials section. As per the screenshot below, insert your details and the keys you generated in step 1.

Click Save.

  3. Create a Hybrid Cloud Agent

You can use an existing agent, but you should create a few agents for higher availability.

Go to the agent home on the OMS server and run:

$emctl status agent

Take a note of the agent URL: https://********.***.net:3872/emd/main/


$ ./emcli login -username=sysman
Enter password :
$ ./emcli register_hybridgateway_agent -hybridgateway_agent_list='


Make sure to restart the agent after you’ve performed this step.


  4. On the OTD Target Host

Insert the OMS server details into the hosts file (/etc/hosts):

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Added for EM  ******


Create the Agent Directory

#mkdir /u01/app/agentHome
#chown oracle:oracle /u01/app/agentHome


  5. Add the target in EM


EM Console

Setup > Add Target > Add Targets Manually

Install Agent on Host

Click Next

Deploy Agent

We will run the scripts manually, so click Continue All Hosts.

Log onto the host and run the following as root:


Click Done in your agent install screen.

If you go to Targets/Hosts, you will see the machine.


  6. Now let’s add OTD

In the console: Setup, Add Target, Add Targets Manually.

Select Add Using Guided Process

Select Traffic Director

Click Add. In the screen below we are adding the info you would use to log into the Traffic Director console. If you need to find the other info, the SNMP port (if it has been configured) can be found in /u01/data/otd-instance/admin-server/config/snmpagt.conf.

Click Continue

Click Add Targets

  7. Verify OTD

In the EM13c console, select Targets/Middleware.

The status cross will clear once the agent has collected the data.

Drill into the OTD.

We can see Traffic Director responses and requests. The cross in the status column is because the agent is trying to connect to the SNMP port, which we have disabled at the moment. We can still collect meaningful info without it.
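If you later want the SNMP-based metrics as well, a quick way to see whether the sub-agent is configured is to look for a port entry in the file mentioned above (a sketch; the helper name is mine and the path follows the instance layout used in this post):

```shell
# Sketch: show any SNMP port configured for the OTD sub-agent,
# or say so if the config file is absent
check_snmp_conf() {
  conf=${1:-/u01/data/otd-instance/admin-server/config/snmpagt.conf}
  if [ -f "$conf" ]; then
    grep -i port "$conf"
  else
    echo "snmpagt.conf not found: SNMP sub-agent not configured"
  fi
}

# Usage:
#   check_snmp_conf
```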