Posts

Showing posts from June, 2014

Oracle Database 11.2.0.3 - Audit Trail Management

There seems to have been a glut of posts around managing an 11gR2 audit trail recently. I've had this one in the works for a little while, so thought I'd push on regardless, even if it's just for my own sake.

Auditing. You've gone and set up logging to your database, so now you know exactly which app is behaving badly, exactly which users forget their passwords every time they log in, and who is trying to select from system tables that they really shouldn't be. The trouble is, if you end up with a badly behaving DBConsole agent like I did, then you find yourself with several million entries in the audit table that you really don't want there, and the SYSTEM tablespace grows much bigger than you'd like.

There are two approaches to take here. One is the timed approach - archive/delete anything older than X days - and this is the one that I will be focusing on here. The other is the "damn it all, delete the whole lot", the easiest way to achieve t
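For the timed approach on 11gR2, a minimal sketch using the `DBMS_AUDIT_MGMT` package looks like the following. The 90-day window and 24-hour cleanup interval are illustrative values, not taken from the post, and `INIT_CLEANUP` relocates the audit trail to SYSAUX by default:

```sql
-- One-time setup: prepare the standard audit trail (AUD$) for cleanup
BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    default_cleanup_interval => 24 /* hours */);
END;
/

-- Mark everything older than 90 days as archived (i.e. safe to delete)...
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);
END;
/

-- ...then purge only up to that timestamp
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/
```

Wrap the last two calls in a scheduler job and you have the rolling X-day window described above.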

Useful stuff to note - (Mainly) Bash scripting

I have a wealth of stuff stuck in Bash scripts that I've generated over the years. Some of the most useful tips, and some of the most rarely used things that I want to note down for myself:

stat - such a simple but terribly useful command. With the -f switch it gives you filesystem info about where the file you pass it is located. Without it, you get easy access to any information about the file you could ever need.

Parameter substitution - insanely useful for stripping paths or extensions off filenames, for setting default values for variables that aren't explicitly set by something else, string handling, providing sensible error messages - so much good stuff. Have a look at these links for more information:
http://www.tldp.org/LDP/abs/html/parameter-substitution.html
http://www.cyberciti.biz/tips/bash-shell-parameter-substitution-2.html

shellcheck.net - paste your shell script (or sections of it) into this tool, and it will tell you about any glaring erro
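A few of those parameter-substitution forms in one runnable sketch (the filename here is made up):

```shell
#!/usr/bin/env bash
# Parameter substitution examples - the path is illustrative.
file="/var/log/oracle/listener.log"

dir=${file%/*}      # strip the last /component  -> /var/log/oracle
name=${file##*/}    # strip the leading path     -> listener.log
base=${name%.*}     # strip the extension        -> listener
ext=${name##*.}     # keep only the extension    -> log

# Default a variable if it's unset or empty
: "${ORACLE_SID:=TESTDB}"

# GNU stat for comparison (not run here):
#   stat -f /var/log         # filesystem details for that location
#   stat -c '%s %U' "$file"  # file size in bytes and owning user

echo "$dir $name $base $ext $ORACLE_SID"
```

Running it prints `/var/log/oracle listener.log listener log TESTDB`.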

Today's learning: what happens when you try to run the same data pump import twice

Data pump - http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_overview.htm#SUTIL100 . If you do anything in Oracle, you probably know about it, and have probably used it once or twice. If you haven't, and you have had the need to move data around, you're probably missing a trick. It's highly flexible, speedy, and very useful.

So, what is Data Pump? Data Pump is used to export and import data from a database. That's it? Yeah, I know it sounds dull, but it's the flexibility that it offers you that is really handy. It exports data to Oracle's own format - not CSV or anything inflexible - so it can do dependency checking. It allows you to export a full database or to export certain schemas, tables or tablespaces; to export to a specific version of dump file (e.g. from an 11.2 database to a format that 10.2 can read); to import schemas, tables or tablespaces from an export file and rename them on the fly whilst also updating references within the databas
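As a flavour of that flexibility, a sketch of an export/import pair (the schema names, directory objects and filenames are invented, and both commands need a real database to run against):

```shell
# Export one schema from an 11.2 database in a 10.2-readable format
expdp system schemas=HR directory=DMP \
      dumpfile=hr.dmp logfile=hr_exp.log version=10.2

# Import it elsewhere, renaming the schema on the fly
impdp system directory=DMP dumpfile=hr.dmp logfile=hr_imp.log \
      remap_schema=HR:HR_TEST
```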

Temporary Tablespaces - a misadventure

Looking at the config of the databases in my new position, early on I decided that the temporary tablespace of our production system is WAY oversized for the database and queries it's serving. We never use more than about 15% of it, even under heavy load. One evening, in the downtime/maintenance period, I decided to reduce it. My first attempt at reducing the datafile size with an "alter database tempfile 'x' resize nG;" failed with an error:

ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Makes sense. From there, I followed a process I muddled together from some googling:

1. Add another temporary tablespace: CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '...' SIZE nG
2. Set this to be the default temporary tablespace: ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2
3. Remove the initial temporary tablespace: DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES
4. Rename the new temporary tablespace: ALTER TABLESPACE TEMP2 RENAME TO TEMP

This process
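For backing up claims like "we never use more than about 15% of it", a query along these lines against v$temp_space_header shows tempfile usage per tablespace (a caveat worth knowing: it reports space allocated since instance startup, so it can overstate what's in use right now):

```sql
-- Rough temp usage check; round to whole megabytes
SELECT tablespace_name,
       ROUND(SUM(bytes_used) / 1024 / 1024) AS used_mb,
       ROUND(SUM(bytes_free) / 1024 / 1024) AS free_mb
FROM   v$temp_space_header
GROUP  BY tablespace_name;
```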

clone.pl - Thin provisioning of Oracle databases - part deux

My experiments with clone.pl are coming to an end. I have a two-stage script written to almost completely automate the cloning process. It's made up of 2 scripts.

Script 1:
- Does a log switch to ensure we're only needing to move data that actually changed during the image copy
- Takes a hot image backup of the database
- Does another log switch
- Works out which archivelogs are needed for recovery
- Creates an RMAN script to actually perform the recovery

Script 2, which does the creation of the clone, has a few prerequisites:
- Oracle binaries installed (one-time)
- Oracle binaries configured for dNFS (one-time)
- The required NFS mount point set up (separate mount for each clone? Or could we use a shared mount point? e.g. /databases is an NFS mount, with /databases/DB1, /databases/DB2 and so on)
- Access to the image copy taken by script 1
- Access to the RMAN script created by script 1
- Environment variables: $SOURCE_SID, $ORACLE_BASE, $NEW_SID
- Access to pfile named initSOURCE_SID .o
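The backup portion of script 1 might look roughly like this - a sketch only, with a placeholder backup path; the real script presumably also captures the LIST output to generate the recovery script:

```shell
# Hot image copy bracketed by log switches
rman target / <<'EOF'
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';            # log switch before the copy
BACKUP AS COPY DATABASE FORMAT '/backup/clone/%U';
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';            # log switch after the copy
LIST ARCHIVELOG ALL;                               # which logs recovery will need
EOF
```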

clone.pl - The art of cloning a database easily and quickly

In database patch set 11.2.0.2, Oracle quietly rolled out the ability to create a clone of your database (relatively) quickly using dNFS (Direct NFS). See Doc ID 1210656.1 for details ( https://support.oracle.com/epmos/faces/DocumentDisplay?id=1210656.1 ).

The idea is familiar to anyone who's done any kind of data virtualisation before. Starting from a base point (the read-only image copy), all reads come from the base, and any changes are recorded in delta files (or change logs). When the database reads data, it first checks whether there's a newer version in the delta files, and if it can't find anything, reads from the source file.

I'm not aiming to write up a how-to guide here, as there are several excellent ones already (the one I keep referring to is Tim Hall's Oracle-Base article: http://www.oracle-base.com/articles/11g/clonedb-11gr2.php ).

The basics are:
- Take an image copy of your source database (RMAN or copy the datafiles using OS utils while
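From memory of the Doc ID and the Oracle-Base article, the clone.pl invocation looks roughly like this - all paths, SIDs and script names below are illustrative, so check those references for the authoritative steps:

```shell
# Sketch of the clonedb workflow
export MASTER_COPY_DIR=/nfs/backup/PROD          # where the image copy lives
export CLONE_FILE_CREATE_DEST=/nfs/clone/CLONE1  # where the delta files go
export CLONEDB_NAME=CLONE1

# clone.pl reads the source pfile and generates two SQL scripts...
perl $ORACLE_HOME/rdbms/install/clone.pl initPROD.ora crtdb.sql dbren.sql

# ...one to create the clone instance, one to attach the image-copy
# datafiles via dbms_dnfs (the clone's pfile needs clonedb=true):
sqlplus / as sysdba @crtdb.sql
sqlplus / as sysdba @dbren.sql
```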

Today's lesson is one in being careful using tab completion

Today I was exporting a complete schema from one production Oracle database (11.2) and importing the same data into a test Oracle database (10.2), also changing the schema it was going to. Not rocket science - a few corners with versions that you forget about and swear about, but nothing too hard.

This was the export:

expdp SYSTEM \
  dumpfile=xxx_10_2_export.dmp \
  full=n \
  schemas=xxxxxxxx \
  directory=DMP \
  logfile=xxx_10_2_export.log \
  version=10.2

I transferred the dump file using rsync, and this is the import that I used:

impdp SYSTEM \
  dumpfile=xxx_10_2_export.dmp \
  schemas=xxxxxxxx \
  remap_schema=xxxxxxxx:yyyyyyyy \
  directory=IMPDIR \
  logfile=xxx_10_2_export.dmp \
  version=10.2

Look at what I got back...

Connected to: Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "/home/oracle/imp/xxx_10_2_export.dmp"
ORA-27091: unable to queue I/O
ORA-27
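Spot the problem: tab completion filled in logfile=xxx_10_2_export.dmp, so the import log was presumably written over the dump file itself, which would explain the "invalid dump file" error. The intended command would have been:

```shell
impdp SYSTEM \
  dumpfile=xxx_10_2_export.dmp \
  schemas=xxxxxxxx \
  remap_schema=xxxxxxxx:yyyyyyyy \
  directory=IMPDIR \
  logfile=xxx_10_2_export.log \
  version=10.2
```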

ORA-1658 - A lesson in creating databases with care

Today, I had a rush job to create and populate a database using data from some of the prod schemas. I was going to do an RMAN duplicate, but that ended badly... More on that later, maybe, if I ever figure out what happened. I created the database using DBCA, created a directory within the DB, transferred over a Data Pump export to the right place and let it rip. Only to see, almost immediately:

ORA-39171: Job is experiencing a resumable wait.
ORA-01658: unable to create INITIAL extent for segment in tablespace XXXXXX

Bugger. I did set up those tablespaces with datafiles that Oracle can write to, right? I wasn't quite that stupid. However, all of the tablespaces and datafiles that I created had autoextend disabled, in what I am assured is an easy and fairly minor oversight. The second I altered them to be auto-extending, the Data Pump import continued as if nothing had ever happened.

Posting this because someone else may have the same symptoms - all the hits I saw for thi
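The fix, for anyone in the same spot, is one alter per affected datafile (the path and sizes below are placeholders):

```sql
-- Let the datafile grow on demand instead of failing the import
ALTER DATABASE DATAFILE '/u01/oradata/MYDB/mydata01.dbf'
  AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED;
```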