export SCHEMA=schema1
export OUTDIR=ORAOUT
expdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SCHEMA}.dmp LOGFILE=${SCHEMA}.explog
...where schema1 is an example Schema name and ORAOUT is an example Directory name.
To export a schema definition only (without any data):
export SCHEMA=schema1
export OUTDIR=ORAOUT
expdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SCHEMA}.dmp LOGFILE=${SCHEMA}.explog CONTENT=metadata_only
export SCHEMA=schema1,schema2,schema3,etc
export DMPNAME=NameOfDumpfileWithoutDmpSuffix
export OUTDIR=ORAOUT
expdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.explog
export SCHEMA=schema1
export OUTDIR=ORAOUT
export TABLE=table
expdp TABLES=${SCHEMA}.${TABLE} DIRECTORY=${OUTDIR} DUMPFILE=${TABLE}.dmp LOGFILE=${TABLE}.explog
NOTE: if you connect as the table owner you can omit the schema from the TABLES= section.

cat > /tmp/parfile.txt <<EOF
SCHEMAS=schema1
DIRECTORY=ORAOUT
DUMPFILE=dumpfile.dmp
INCLUDE=TABLE:"IN ('table1','table2','table3','etc')"
LOGFILE=dumpfile.explog
EOF
expdp PARFILE=/tmp/parfile.txt
If you have a license for the Advanced Compression option you can add this option to your exports...
COMPRESSION=ALL
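For example, a compressed schema export might look like this (a sketch reusing the SCHEMA and OUTDIR variables above; requires the Advanced Compression license):

```shell
# COMPRESSION=ALL compresses both metadata and table data in the dump file
expdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SCHEMA}.dmp LOGFILE=${SCHEMA}.explog COMPRESSION=ALL
```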
CONSISTENT=Y
This is a legacy exp parameter; from 11.2 onwards Data Pump accepts it and translates it into a consistent export as of the job start time. It is effectively the same as specifying this (without the work of finding out the SCN)...
FLASHBACK_SCN=SCN
Alternatively, use something like...
FLASHBACK_TIME=SYSTIMESTAMP
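If you do want to pin the export to an explicit SCN, the current SCN can be read from V$DATABASE (a sketch; requires SELECT access to the view):

```sql
-- Current SCN of the database; use the returned value with FLASHBACK_SCN=...
SELECT current_scn FROM v$database;
```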
The easiest way to do this is to use SQL Developer. Right click on the Package and select "export". This will start a Wizard to guide you through the steps to create an SQL file that can be run to recreate the package.
If you want to copy a user from one instance to another without taking any objects then the easiest way to do this is by using DBMS_METADATA. See the "Users" page for an example of doing this.
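As a sketch of the DBMS_METADATA approach (SCHEMA1 is a placeholder user; run on the source instance and execute the generated DDL on the target):

```sql
SET LONG 20000
-- Generate the CREATE USER statement for an existing user
SELECT DBMS_METADATA.GET_DDL('USER', 'SCHEMA1') FROM dual;
-- Optionally also extract the user's system privileges
-- (GET_GRANTED_DDL raises an error if no grants of the requested type exist)
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'SCHEMA1') FROM dual;
```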
export SCHEMA=schema1
export OUTDIR=ORAOUT
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SCHEMA}.dmp LOGFILE=${SCHEMA}.implog
export SCHEMA=schema1,schema2,schema3,etc
export DMPNAME=NameOfDumpfileWithoutDmpSuffix
export OUTDIR=ORAOUT
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.implog
without Grants and Synonyms...
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.implog EXCLUDE=GRANT EXCLUDE=SYNONYM
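The same exclusions can go in a parfile, which also sidesteps shell quoting if a name filter is attached to an object type (schema1, the file names, and TABLE1 are placeholders):

```
SCHEMAS=schema1
DIRECTORY=ORAOUT
DUMPFILE=dumpfile.dmp
LOGFILE=dumpfile.implog
EXCLUDE=GRANT
EXCLUDE=SYNONYM
EXCLUDE=TABLE:"= 'TABLE1'"
```

Run with impdp PARFILE=/tmp/parfile.txt as in the other parfile examples.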
export SCHEMA=schema1
export TABLE=table1
export OUTDIR=ORAOUT
impdp TABLES=${SCHEMA}.${TABLE} DIRECTORY=${OUTDIR} DUMPFILE=${TABLE}.dmp LOGFILE=${TABLE}.implog
cat > /tmp/parfile.txt <<EOF
SCHEMAS=schema1
DIRECTORY=ORAOUT
DUMPFILE=dumpfile.dmp
INCLUDE=TABLE:"IN ('table1','table2','table3','etc')"
LOGFILE=dumpfile.implog
REMAP_TABLESPACE=source_ts:target_ts
EOF
impdp PARFILE=/tmp/parfile.txt
To duplicate a schema from an existing Schema Export file...
export SOURCE_SCHEMA=schema1
export TARGET_SCHEMA=schema2
export SOURCE_TS=tablespace1
export TARGET_TS=tablespace2
export OUTDIR=ORAOUT
impdp SCHEMAS=${SOURCE_SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SOURCE_SCHEMA}.dmp LOGFILE=${SOURCE_SCHEMA}.duplog \
REMAP_SCHEMA=${SOURCE_SCHEMA}:${TARGET_SCHEMA} REMAP_TABLESPACE=${SOURCE_TS}:${TARGET_TS}
NOTE: For an alternative way of achieving this see the DataPump API section below.

export SCHEMA=schema1,schema2,schema3,etc
export DMPNAME=NameOfDumpfileWithoutDmpSuffix
export OUTDIR=ORAOUT
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.ddllog SQLFILE=${DMPNAME}.sql
The example below gives the minimum recommended commands to clone an existing schema to a new schema.
DECLARE
  jobhandle NUMBER;
BEGIN
  jobhandle := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                                  job_mode    => 'SCHEMA',
                                  remote_link => 'DBLINKNAME',
                                  job_name    => 'SOURCE_SCHEMA_CLONE');
  DBMS_DATAPUMP.ADD_FILE(handle    => jobhandle,
                         filename  => 'SOURCE_SCHEMA_CLONE.log',
                         directory => 'ORAOUT',
                         filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_FILTER(jobhandle, 'SCHEMA_LIST', '''SOURCE_SCHEMA''');
  DBMS_DATAPUMP.METADATA_REMAP(jobhandle, 'REMAP_SCHEMA', 'SOURCE_SCHEMA', 'TARGET_SCHEMA');
  DBMS_DATAPUMP.METADATA_REMAP(jobhandle, 'REMAP_TABLESPACE', 'SOURCE_TS', 'TARGET_TS');
  DBMS_DATAPUMP.START_JOB(jobhandle);
END;
/
...where DBLINKNAME is an example Database Link to the source instance
...and SOURCE_SCHEMA_CLONE is an example job_name
...and ORAOUT is an example Directory name (target for the log file only)
...and SOURCE_SCHEMA is an example source Schema and TARGET_SCHEMA is an example target Schema
...and SOURCE_TS and TARGET_TS are example Tablespace names.

Some useful optional parameters (to be placed before DBMS_DATAPUMP.START_JOB in the example above)...
DBMS_DATAPUMP.METADATA_FILTER(jobhandle, 'EXCLUDE_PATH_EXPR', 'IN (''GRANT'')');
...where GRANT indicates that grants should be excluded.

DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'FLASHBACK_SCN', 999999999999);
...where 999999999999 is an example SCN.

DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'TABLE_EXISTS_ACTION', 'REPLACE');
...to replace existing objects in the target schema with objects from the export file.

Run on both Source and Target databases...
SELECT value
FROM NLS_DATABASE_PARAMETERS
WHERE parameter='NLS_CHARACTERSET';
If the target database has a characterset that is different to the source database then there could be data loss (unless the source database characterset is a subset of the target database characterset). For example, if you export from a database with WE8MSWIN1252 and import into a database with WE8ISO8859P1 then any occurrences of the Euro symbol will be lost. You are likely to see errors in the DataPump import log (e.g. ORA-39346).
Run on the Source database...
SELECT DISTINCT grantee
FROM dba_tab_privs
WHERE owner = '&SCHEMA';
If the grantees do not exist in the target database then you will get errors in the DataPump import log (e.g. ORA-01917).
Run on the Source database...
SET LINESIZE 132
SET PAGESIZE 40
COLUMN SYNONYM_NAME FORMAT A30
COLUMN TABLE_OWNER FORMAT A30
COLUMN TABLE_NAME FORMAT A30
SELECT synonym_name, table_owner, table_name
FROM dba_synonyms
WHERE owner = '&&SCHEMA'
AND table_owner != '&SCHEMA';
If the table_owners and table_names do not exist in the target database then you will get errors in the DataPump import log (e.g. ORA-39082).
If a Data Pump export becomes very slow then check "EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)"
ALTER SYSTEM SET STREAMS_POOL_SIZE=200M SCOPE=BOTH;
If increasing STREAMS_POOL_SIZE does not help then you may also need to restart the database.
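Before changing it, the current allocation can be checked with a query like this (sizes are reported in bytes):

```sql
-- Current and minimum sizes of the streams pool in the SGA
SELECT component, current_size, min_size
FROM   v$sga_dynamic_components
WHERE  component = 'streams pool';
```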
ORA-39083: Object type TYPE failed to create with error:
ORA-02304: invalid object identifier literal
During Schema Duplication... The object types already exist in the source schema. When the types are exported, the object identifier (OID) of each type is exported with them. Within the current architecture, the object identifier must be unique in the database. During import (into the same database), Data Pump tries to create the same object types in the target schema. As the OIDs of the types already exist in the source schema, the types cannot be created in the target schema.
Use the Import DataPump parameter TRANSFORM which enables you to alter object creation DDL for the types. The value N for the transform name OID implies that a new OID is assigned.
export SOURCE_SCHEMA=schema1
export TARGET_SCHEMA=schema2
export SOURCE_TS=tablespace1
export TARGET_TS=tablespace2
export OUTDIR=ORAOUT
impdp SCHEMAS=${SOURCE_SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${SOURCE_SCHEMA}.dmp LOGFILE=${SOURCE_SCHEMA}.duplog \
REMAP_SCHEMA=${SOURCE_SCHEMA}:${TARGET_SCHEMA} TRANSFORM=oid:n
ORA-39346: data loss in character set conversion for object PROCOBJ:"OWNER"."OBJECT_NAME"
NLS_CHARACTERSET mismatch between source and target databases during DataPump Import.
Check with:
SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter='NLS_CHARACTERSET';
Ideally, both databases should be converted to a common characterset before you start. For 12.1+ it is recommended that this is AL32UTF8.
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'NAME' does not exist
A User or Role does not exist in the target database so the GRANT cannot be made.
Create the User in the target database
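A minimal sketch of creating the missing grantee on the target (the user name, password, and tablespaces are placeholders to be replaced with real values):

```sql
-- NAME, the password, and the tablespaces are example values
CREATE USER name IDENTIFIED BY "ChangeMe#123"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION TO name;
```

If the missing grantee is a role rather than a user, use CREATE ROLE instead.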
ORA-39082: Object type OBJECT TYPE:"OWNER"."OBJECT_NAME" created with compilation warnings
It is likely that a dependency does not exist in the target database (i.e. this error is more common when performing a schema duplicate between two different Oracle Instances).
Use SQL Developer to attempt to compile the object on the target instance and review the errors
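Alternatively, recompile from SQL*Plus and inspect the errors directly (OWNER and OBJECT_NAME are placeholders):

```sql
-- Recompile an invalid package body and show why it failed
ALTER PACKAGE owner.object_name COMPILE BODY;
SHOW ERRORS PACKAGE BODY owner.object_name
-- Or list every invalid object in the imported schema
SELECT object_name, object_type
FROM   dba_objects
WHERE  owner = 'OWNER' AND status = 'INVALID';
```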
ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading of a secondary time zone data file is in an active state
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.implog
cat ${DMPNAME}.implog
This was due to a DST_UPGRADE_STATE status of UPGRADE...
SELECT property_name, SUBSTR(property_value, 1, 30) value
FROM database_properties
WHERE property_name LIKE 'DST_%'
ORDER BY property_name;
CONNECT / AS SYSDBA
ALTER SESSION SET "_with_subquery"=materialize;
ALTER SESSION SET "_simple_view_merging"=TRUE;
SET SERVEROUTPUT ON
VAR numfail NUMBER
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/
SELECT property_name, SUBSTR(property_value, 1, 30) value
FROM database_properties
WHERE property_name LIKE 'DST_%'
ORDER BY property_name;
ORA-30094: failed to find the time zone data file for version 18 in $ORACLE_HOME/oracore/zoneinfo
impdp SCHEMAS=${SCHEMA} DIRECTORY=${OUTDIR} DUMPFILE=${DMPNAME}.dmp LOGFILE=${DMPNAME}.implog
cat ${DMPNAME}.implog
Source database has a higher timezone version than the target database.
Upgrade the timezone of the target database.
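The timezone file versions can be compared by running this on both the source and target databases:

```sql
-- Timezone data file name and version in use by this database
SELECT * FROM v$timezone_file;
```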