UNIX Filesystems & Directories for Oracle Databases

SSD/Flash Considerations


Placing redo logs on SSD is not recommended. Exadata's “Smart Flash Logging” feature does use SSD for redo, but note that it writes redo to SSD *in addition to* magnetic disk: Oracle writes the redo in parallel to a file on SSD and a file on magnetic disk, and finishes the operation as soon as either write succeeds. This is a no-lose situation that can only improve performance. That is not the case if you place your redo logs on both SSD and magnetic disk yourself; then the redo write finishes only when the slowest device finishes writing, making it a no-win situation.
Even if you place all copies of your redo logs on SSD, you are still likely to see performance degradation. This is expected: as discussed above, SSD has no performance benefit for sequential writes of the kind redo logs usually experience, and may even end up slower. Our experience at Pythian with customers attempting to place redo logs on SSD confirms this issue, as do the tests run by Guy Harrison.
https://blog.pythian.com/de-confusing-ssd-for-oracle-databases/
NOTE that SSD is not the same as an all-flash array. An all-flash array is able to avoid the consequences of [the above] restrictions by managing the flash globally, so that erases do not block reads and writes.
https://flashdba.com/2013/08/22/storage-myths-put-oracle-redo-on-ssd/
In September 2012 Oracle set a world record TPC-C benchmark. The redo was located on a RAID set consisting of 600GB 15K RPM disk drives.

The bottom line, as I understand it, is that an all-flash array (not SSD) should be able to provide comparable Oracle Redo performance to 15K RPM disk drives especially in a SAN configuration where there is likely to be cache in front of all storage to effectively hide any write bottlenecks. This may not be the most cost efficient way to deliver that level of redo performance. However, if your entire data centre runs on flash storage, then buying disk drives just to service Oracle Redo would appear to be unnecessary.

Create Filesystems

All Versions
  • /u01

  • /u02

  • /fra

  • /redo1

  • /redo2

  • /controlfile1

  • /controlfile2

/u01

  • The /u01 filesystem holds the ORACLE_BASE, ORACLE_HOME, scripts, installation files, and text/xml log files.

  • Oracle Database 19c needs a minimum of 10.2GB on this filesystem for installation.
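As a sketch, the 10.2GB requirement above can be checked before installation. This assumes POSIX `df -P` output (available kB in field 4, which holds on both AIX and Linux); the function name is illustrative, not an Oracle tool.

```shell
# Hypothetical pre-install check: confirm a mount point has enough free
# space before unpacking the Oracle home. df -Pk (POSIX format) reports
# available kB in field 4 on both AIX and Linux.
check_free_kb() {            # usage: check_free_kb <mountpoint> <required_kb>
    free=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 { print $4 }')
    if [ -z "$free" ]; then
        echo "UNKNOWN: cannot stat $1"
    elif [ "$free" -ge "$2" ]; then
        echo "OK: $1 has ${free}kB free"
    else
        echo "INSUFFICIENT: $1 has ${free}kB free, need $2"
    fi
}

check_free_kb /u01 10695476   # 10.2GB expressed in kB
```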

Standard Subdirectories

Subdirectory      Description
/u01/app/oracle   ORACLE_BASE
/u01/dba          DBA Scripts
/u01/img          Installation Files
/u01/in           Input files
/u01/out          Output files

As 'root'...

chown -R oracle:oinstall /u01

As 'oracle'...

mkdir -p /u01/app/oracle

mkdir -p /u01/dba

mkdir -p /u01/img

mkdir -p /u01/in

mkdir -p /u01/out


chmod -R 775 /u01/app/

chmod -R 775 /u01/dba/

chmod -R 775 /u01/img/

chmod -R 775 /u01/in/

chmod -R 775 /u01/out/

12.2.0.1

As 'oracle'...

mkdir -p /u01/app/oracle/product/122010

chgrp oinstall /u01/app/oracle/product/122010

mkdir -p /u01/img/122010

12.1.0.2.0

As 'oracle'...

mkdir -p /u01/app/oracle/product/121020

chgrp oinstall /u01/app/oracle/product/121020

mkdir -p /u01/img/121020

/u02

As 'root'...

chown -R oracle:oinstall /u02

As 'oracle'...

mkdir -p /u02/app/oracle

As the oracle user, create the directory for the database's data files...

cd /u02

mkdir /u02/oradata

chown oracle:oinstall /u02/oradata

chmod 775 /u02/oradata

/fra or /flashrecovery

The /fra (or /flashrecovery) filesystem holds the fast recovery area: archived redo logs, RMAN backups, flashback logs, and dumps.

AIX

Filesystem mount options

Oracle/IBM recommend the RBRW mount option for /fra (i.e. archivelogs and dumps).

  • Archive logs: mount -o rbrw. Do not use CIO; use the jfs2 rbrw option.

  • Dumps: mount -o rbrw

sqlplus / as sysdba

SHUTDOWN IMMEDIATE

exit

umount /fra

mount -o rbrw /fra

lsfs /fra

sqlplus / as sysdba

STARTUP

exit
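The mount -o rbrw above lasts only until the next mount. To make the option persistent across reboots, the /etc/filesystems stanza can be updated with chfs (a sketch; if /fra already has other mount options, include them in the comma-separated list, and verify the result with lsfs as above).

```shell
# Record the rbrw option in the /etc/filesystems stanza for /fra so it
# survives a remount or reboot (AIX jfs2; run as root).
chfs -a options=rbrw /fra

lsfs /fra    # the Options column should now show rbrw
```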

/redo1 & /redo2

The /redo filesystems (generally /redo1 and /redo2) hold the online redo log files for the database.

These filesystems are optional. If they do not exist, online redo logs tend to be found in /u02 and /flashrecovery. On AIX, these filesystems are generally created when switching a database server to use CIO.

/controlfile1 & /controlfile2

The /controlfile1 and /controlfile2 filesystems hold the Oracle database controlfiles.

These filesystems are optional. If they do not exist, controlfiles are generally found on /u01, /u02, or /flashrecovery. On AIX, these filesystems tend to be created when switching a database server to use CIO.

AIX Example

Create new filesystems /controlfile1 & /controlfile2 on hdisk19 & hdisk20, each 400MB in size with a 512-byte block size.

NOTE: If using CIO (whether via filesystemio_options=SETALL, a CIO mount option, or both), you must create separate filesystems for redo logs and control files (or a single filesystem for both), with an agblksize of 512 bytes rather than the default 4KB.

mkdir /controlfile1

mkdir /controlfile2

mklv -t jfs2 -y control01 orainstvg 400M hdisk19

mklv -t jfs2 -y control02 oraflashvg 400M hdisk20

crfs -v jfs2 -d /dev/control01 -m /controlfile1 -a log=INLINE -a agblksize=512 -A yes

crfs -v jfs2 -d /dev/control02 -m /controlfile2 -a log=INLINE -a agblksize=512 -A yes

mkdir /controlfile1/DBNAME

mkdir /controlfile2/DBNAME

chown -R oracle:oinstall /controlfile1

chown -R oracle:oinstall /controlfile2
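AIX mklv normally sizes a logical volume in logical partitions (LPs) rather than megabytes, so if the 400M argument above is rejected on your system, derive the LP count from the PP size reported by lslv. A sketch of the rounding arithmetic (the helper name and the 64MB PP size are examples, not part of AIX):

```shell
# Hypothetical helper: LPs needed for a target size in MB, rounding up
# so the volume is never smaller than requested.
lps_for_mb() {               # usage: lps_for_mb <target_mb> <pp_size_mb>
    echo $(( ($1 + $2 - 1) / $2 ))
}

lps_for_mb 400 64    # 7 LPs for a 400MB volume with 64MB PPs
# e.g. mklv -t jfs2 -y control01 orainstvg $(lps_for_mb 400 64) hdisk19
```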

Resize filesystems

  • /tmp (6GB)

  • /opt (12GB)

  • /home/oracle (500MB free space)


/tmp

/tmp should be 6GB+ for 12c and later, but it can be 2GB for 11g and earlier.

AIX

Check current value:

df -g /tmp

Change size (if necessary)....

Note: These steps assume there is enough space in the volume group to fulfil your request.

Determine the logical volume name and current filesystem size...

lvn=$(lsfs | grep /tmp | awk '{ print $1 }' | cut -d/ -f3)

siz=$(lsfs | grep /tmp | awk '{ print $5 }')

(( siz = ${siz} / 2 ))

(( siz = ${siz} / 1024 ))

echo ${lvn} ${siz}

Determine the physical partition (PP) size of the logical volume...

pps=$(lslv ${lvn} | grep "PP SIZE" | awk '{ print $6 }')

echo ${pps}

Divide the amount of extra space required by the LP size (assumed to be the same as the PP size) to calculate the number of LPs to add...

11g...

(( lps = 2048 - ${siz} ))

(( lps = ${lps} / ${pps} ))

echo ${lps}

extendlv ${lvn} ${lps}

chfs -a size=2G /tmp # 11g

df -g /tmp

12c/18c/19c...

(( lps = 6144 - ${siz} ))

(( lps = ${lps} / ${pps} ))

echo ${lps}

extendlv ${lvn} ${lps}

chfs -a size=6G /tmp # 12c

df -g /tmp
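The LP arithmetic above can be wrapped in one small function (pure shell; sizes in MB, PP size as reported by lslv; the extendlv/chfs calls themselves are unchanged, and the function name is illustrative):

```shell
# Sketch: extra LPs needed to grow a filesystem from its current size to
# a target size, rounding up; prints 0 when no growth is needed.
extra_lps() {                # usage: extra_lps <target_mb> <current_mb> <pp_mb>
    if [ "$1" -le "$2" ]; then
        echo 0
    else
        echo $(( ($1 - $2 + $3 - 1) / $3 ))
    fi
}

extra_lps 6144 2048 64    # 64 LPs to grow /tmp from 2GB to 6GB with 64MB PPs
# then: extendlv ${lvn} $(extra_lps 6144 ${siz} ${pps}); chfs -a size=6G /tmp
```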

/home/oracle

/home/oracle needs 500MB free space.

AIX

The following instructions assume /home/oracle is part of the /home mount point and that a 2GB filesystem will yield a minimum of 500MB free space.

You may need to adjust appropriately.

Check current value:

df -g /home

Change size (if necessary)....

Note: These steps assume there is enough space in the volume group to fulfil your request.

Any issues should be discussed with a UNIX system administrator.

Determine the logical volume name and current filesystem size...

lvn=$(lsfs | grep /home | awk '{ print $1 }' | cut -d/ -f3)

siz=$(lsfs | grep /home | awk '{ print $5 }')

(( siz = ${siz} / 2 ))

(( siz = ${siz} / 1024 ))

echo ${lvn} ${siz}

Determine the physical partition (PP) size of the logical volume...

pps=$(lslv ${lvn} | grep "PP SIZE" | awk '{ print $6 }')

echo ${pps}

Divide the amount of extra space required by the LP size (assumed to be the same as the PP size) to calculate the number of LPs to add...

(( lps = 2048 - ${siz} ))

(( lps = ${lps} / ${pps} ))

echo ${lps}

extendlv ${lvn} ${lps}

chfs -a size=2G /home

df -g /home

/opt

AHF (Oracle Autonomous Health Framework) requires 11GB free in /opt.

If /opt is smaller than 12GB, the following steps resize it to 12GB (12288MB).

AIX

Check...

df -g /opt

Change...

Ensure there is enough space in the volume group...

Note: These steps assume there is enough space in the volume group to fulfil your request.

lvn=$(lsfs | grep /opt | awk '{ print $1 }' | cut -d/ -f3)

vgn=$(lslv ${lvn} | grep "VOLUME GROUP" | awk '{ print $6 }')

fre=$(lsvg ${vgn} | grep "FREE PPs" | awk '{ print $7 }' | awk '{ sub(/\(/,""); print }')

(( fgb = ${fre} / 1024 ))

echo ${fre}'MB ('${fgb}'GB)'

Determine the current filesystem size...

siz=$(lsfs | grep /opt | awk '{ print $5 }')

(( siz = ${siz} / 2 ))

(( siz = ${siz} / 1024 ))

echo ${lvn} ${siz}'MB'

Determine the physical partition (PP) size of the logical volume...

pps=$(lslv ${lvn} | grep "PP SIZE" | awk '{ print $6 }')

echo ${pps}

Divide the amount of extra space required by the LP size (assumed to be the same as the PP size) to calculate the number of LPs to add...

(( lps = 12288 - ${siz} ))

(( lps = ${lps} / ${pps} ))

echo ${lps}

extendlv ${lvn} ${lps}

chfs -a size=12G /opt

df -g /opt
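The "enough space in the volume group" assumption noted before each resize can be made into an explicit guard (pure shell; free space in MB as extracted from the lsvg "FREE PPs" line above; the function name is illustrative):

```shell
# Sketch: refuse a resize when the volume group cannot supply the extra
# space (both values in MB).
vg_can_grow() {              # usage: vg_can_grow <vg_free_mb> <extra_mb>
    if [ "$1" -ge "$2" ]; then
        echo "ok"
    else
        echo "short by $(( $2 - $1 ))MB"
    fi
}

vg_can_grow 8192 4096    # ok
```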