This belongs in the "really should be obvious" category

A relatively new OEM12c and Oracle 11.2.0.3 installation, single server.

The VM was taken down for a memory upgrade, hurrah!  Turns out that 4GB really isn't quite enough for OEM and the database on the same box to work together nicely.  I mean, it worked OK, but didn't work quickly and often struggled when busy.

The server came back up and the gcstartup script fired, starting OEM.  At this point we're not using clusterware (don't ask me why, I don't know; I'm very used to it and consider it "the norm", but new job, new challenges, right?), and there's no init script to start the database on boot, so the OMS couldn't talk to its repository.  I started the database through SQL*Plus, thinking "yep, that'll do it".
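For reference, bringing the repository database up by hand looks something like this (a minimal sketch; the SID and ORACLE_HOME are placeholders for this box, not the real values):

    # Placeholder environment for the OEM repository database
    export ORACLE_SID=emrep
    export ORACLE_HOME=/u1/app/oracle/product/11.2.0.3/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH

    # Start the instance over a bequeath (local, non-network) connection
    sqlplus / as sysdba <<EOF
    startup
    exit
    EOF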

But apparently not.  The log (/u1/app/oracle/gc_inst/user_projects/domains/GCDomain/servers/EMGC_OMS1/logs/EMGC_OMS1.out) was flooded with "Network error: can't connect to database"-style errors.

I was able to connect through SQL*Plus as sysdba; the database was open, but that connection doesn't go through the network stack, so it proved nothing.  It took me a few minutes to test whether I could connect remotely through SQL Developer.  It failed.  At that point (like it wasn't before...) it was obvious: the listener.  I facepalmed and remembered that in a non-clusterware environment nothing starts the dependencies, like the listener, automatically.
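If you want the same remote check from a shell instead of SQL Developer, an EZConnect attempt does the job (host, port and service name here are made up):

    # A connection that goes through the listener; a TNS-12541 "no listener"
    # error here points at the listener rather than the database
    sqlplus system@//dbhost.example.com:1521/emrep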

"lsnrctl status" confirmed it, the listener was down.  "lsnrctl start LISTENER" started it, and the log started to look much healthier.

A few minutes later, everything was back to normal.  And now my mailbox is flooded with emails telling me that the EM service was down.  Must remember to black out the targets next time.
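Something along these lines should cover it next time (the blackout name, target and 60-minute duration are placeholders, and it's worth checking "emcli help create_blackout" for the exact -schedule syntax on your version):

    emcli login -username=sysman
    emcli create_blackout -name="vm_memory_upgrade" \
          -add_targets="myhost.example.com:host" \
          -reason="VM memory upgrade" \
          -schedule="duration::60"

    # ... do the maintenance ...

    emcli stop_blackout -name="vm_memory_upgrade"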
