http://appsdbasworld.blogspot.com

appsdbasworld

 
    10g Data Pump
    Tuesday, July 29, 2008
    Data Pump:
    ---------------
    In Oracle 10g, the exp and imp utilities have been redesigned as Oracle Data Pump (expdp/impdp), which is much faster than the old client tools. Data Pump runs as a job inside the database rather than as a stand-alone client application, so jobs are largely independent of the process that started them. One machine (say, a scheduled job) can start an export while another machine (such as a DBA's laptop) checks the status of the job. Because the job runs inside the database, it keeps running even if the client that started it disconnects.
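
    Because the job lives in the database, a second session can reattach to it by name. A minimal sketch (the job name SYS_EXPORT_SCHEMA_01 is illustrative; real job names can be found in DBA_DATAPUMP_JOBS):

    ```
    $ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01
    Export> status        # show progress of the running job
    Export> exit_client   # leave the job running and return to the shell
    ```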


    Basic settings:
    --------------------
    If you want to export to a file, the first thing you must do is create a database DIRECTORY object for the output directory and grant access to the users who will perform exports and imports.

    Why directory objects?
    They are needed to ensure data security and integrity. Without them, users would be able to read data they should not have access to and perform unwarranted operations on the server.

    For this example we will use the SCOTT account and create a directory object it can access.

    Connect to SQL*Plus as the SYS user:

    $ sqlplus '/ as sysdba'

    Make sure the SCOTT account is unlocked by connecting to SQL*Plus as SCOTT.

    Grant the CREATE DIRECTORY privilege to SCOTT and create the directory object:

    GRANT CREATE ANY DIRECTORY TO scott;
    CREATE OR REPLACE DIRECTORY test_dir AS '/u01/logicalbkp';

    Grant SCOTT read and write privileges on the directory where the backup will be written:

    GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
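
    To confirm the directory object was created with the intended path, you can query the data dictionary (DBA_DIRECTORIES requires DBA privileges; a non-privileged user can query ALL_DIRECTORIES instead):

    ```sql
    -- List the directory object and the OS path it maps to
    SELECT directory_name, directory_path
    FROM   dba_directories
    WHERE  directory_name = 'TEST_DIR';
    ```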
    Tables export and import:
    -----------------------------------

    expdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

    impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

    The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into tables that already exist.
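
    For example, to append the exported rows to EMP and DEPT when those tables already exist in the target schema, the import above becomes:

    ```
    impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log table_exists_action=APPEND
    ```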

    Schemas export and import:
    ------------------------------------

    expdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
    impdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log


    Database exports and imports:
    -------------------------------

    expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

    impdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

    Data Pump performance can be improved by using the PARALLEL parameter. It should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter, which allows multiple dump files to be created or read:
    expdp scott/tiger schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
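
    The %U wildcard also combines with the FILESIZE parameter to cap the size of each piece of the dump set; a sketch (the 2G limit is an arbitrary example):

    ```
    expdp scott/tiger schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp filesize=2G logfile=expdpSCOTT.log
    ```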
    The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs:
    system@db10g> select * from dba_datapump_jobs;

    OWNER_NAME JOB_NAME           OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS
    ---------- ------------------ --------- -------- --------- ------ -----------------
    SYSTEM     SYS_EXPORT_FULL_01 EXPORT    FULL     EXECUTING      1                 1

    The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only the objects it specifies are included in the export. When the EXCLUDE parameter is used, all objects except those it specifies are included:
    expdp scott/tiger schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

    expdp scott/tiger schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

    All Data Pump work is performed by server processes (not DBMS_JOB jobs), coordinated by a master control process that uses Advanced Queuing. At run time an advanced queue table, named after the job, is created for the master control process and dropped when the Data Pump job completes. The job and its queue can be named with the JOB_NAME parameter.

    Cancelling the client process does not stop the associated Data Pump job. Pressing Ctrl+C on the client during a job stops the client output and presents a command prompt; typing "status" at this prompt lets you monitor the current job:

    Export> status

    Job: SYS_EXPORT_FULL_01
    Operation: EXPORT
    Mode: FULL
    State: EXECUTING
    Bytes Processed: 0
    Current Parallelism: 1
    Job Error Count: 0
    Dump File: D:\TEMP\DB10G.DMP
    Bytes Written: 4,096

    Worker 1 Status:
    State: EXECUTING
    Object Schema: SYSMAN
    Object Name: MGMT_CONTAINER_CRED_ARRAY
    Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Completed Objects: 261
    Total Objects: 261
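
    The same interactive prompt can stop the job cleanly, after which it can be resumed from any session by reattaching. A sketch (job name as reported by the status output above):

    ```
    Export> stop_job

    $ expdp system/password attach=SYS_EXPORT_FULL_01
    Export> start_job          # resume where the job left off
    Export> continue_client    # return to logging mode
    ```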
    The HELP=Y option displays the available parameters:

    expdp help=y
    Export: Release 10.1.0.2.0 - Production on Tuesday, 23 March, 2004 8:33

    Copyright (c) 2003, Oracle. All rights reserved.


    The Data Pump export utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:

    Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    You can control how Export runs by entering the 'expdp' command followed
    by various parameters. To specify parameters, you use keywords:

    Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
    or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

    USERID must be the first parameter on the command line.

    Keyword Description (Default)
    ------------------------------------------------------------------------------
    ATTACH Attach to existing job, e.g. ATTACH [=job name].
    CONTENT Specifies data to unload where the valid keywords are:
    (ALL), DATA_ONLY, and METADATA_ONLY.
    DIRECTORY Directory object to be used for dumpfiles and logfiles.
    DUMPFILE List of destination dump files (expdat.dmp),
    e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
    ESTIMATE Calculate job estimates where the valid keywords are:
    (BLOCKS) and STATISTICS.
    ESTIMATE_ONLY Calculate job estimates without performing the export.
    EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
    FILESIZE Specify the size of each dumpfile in units of bytes.
    FLASHBACK_SCN SCN used to set session snapshot back to.
    FLASHBACK_TIME Time used to get the SCN closest to the specified time.
    FULL Export entire database (N).
    HELP Display Help messages (N).
    INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
    JOB_NAME Name of export job to create.
    LOGFILE Log file name (export.log).
    NETWORK_LINK Name of remote database link to the source system.
    NOLOGFILE Do not write logfile (N).
    PARALLEL Change the number of active workers for current job.
    PARFILE Specify parameter file.
    QUERY Predicate clause used to export a subset of a table.
    SCHEMAS List of schemas to export (login schema).
    STATUS Frequency (secs) job status is to be monitored where
    the default (0) will show new status when available.
    TABLES Identifies a list of tables to export - one schema only.
    TABLESPACES Identifies a list of tablespaces to export.
    TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
    TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
    VERSION Version of objects to export where valid keywords are:
    (COMPATIBLE), LATEST, or any valid database version.

    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed

    Command Description
    ------------------------------------------------------------------------------
    ADD_FILE Add dumpfile to dumpfile set.
    ADD_FILE=dumpfile-name
    CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
    EXIT_CLIENT Quit client session and leave job running.
    HELP Summarize interactive commands.
    KILL_JOB Detach and delete job.
    PARALLEL Change the number of active workers for current job.
    PARALLEL=<number of workers>.
    START_JOB Start/resume current job.
    STATUS Frequency (secs) job status is to be monitored where
    the default (0) will show new status when available.
    STATUS=[interval]
    STOP_JOB Orderly shutdown of job execution and exits the client.
    STOP_JOB=IMMEDIATE performs an immediate shutdown of the
    Data Pump job.
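
    The PARFILE keyword above can collect any of these parameters in a plain text file, which keeps the command line short and avoids shell quoting problems with clauses like INCLUDE. A minimal sketch (file name and parameter values are hypothetical):

    ```shell
    # Write a parameter file; each line is one KEYWORD=value pair.
    cat > scott_exp.par <<'EOF'
    SCHEMAS=SCOTT
    DIRECTORY=TEST_DIR
    DUMPFILE=SCOTT_%U.dmp
    LOGFILE=expdpSCOTT.log
    PARALLEL=2
    EOF
    # The export would then be run as:
    #   expdp scott/tiger PARFILE=scott_exp.par
    grep -c '=' scott_exp.par   # count of parameters written to the file
    ```
    
    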

    impdp help=y

    Import: Release 10.1.0.2.0 - Production on Saturday, 11 September, 2004 17:22

    Copyright (c) 2003, Oracle. All rights reserved.


    The Data Pump Import utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:

    Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    You can control how Import runs by entering the 'impdp' command followed
    by various parameters. To specify parameters, you use keywords:

    Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

    USERID must be the first parameter on the command line.

    Keyword Description (Default)
    ------------------------------------------------------------------------------
    ATTACH Attach to existing job, e.g. ATTACH [=job name].
    CONTENT Specifies data to load where the valid keywords are:
    (ALL), DATA_ONLY, and METADATA_ONLY.
    DIRECTORY Directory object to be used for dump, log, and sql files.
    DUMPFILE List of dumpfiles to import from (expdat.dmp),
    e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
    ESTIMATE Calculate job estimates where the valid keywords are:
    (BLOCKS) and STATISTICS.
    EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
    FLASHBACK_SCN SCN used to set session snapshot back to.
    FLASHBACK_TIME Time used to get the SCN closest to the specified time.
    FULL Import everything from source (Y).
    HELP Display help messages (N).
    INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
    JOB_NAME Name of import job to create.
    LOGFILE Log file name (import.log).
    NETWORK_LINK Name of remote database link to the source system.
    NOLOGFILE Do not write logfile.
    PARALLEL Change the number of active workers for current job.
    PARFILE Specify parameter file.
    QUERY Predicate clause used to import a subset of a table.
    REMAP_DATAFILE Redefine datafile references in all DDL statements.
    REMAP_SCHEMA Objects from one schema are loaded into another schema.
    REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
    REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
    SCHEMAS List of schemas to import.
    SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
    SQLFILE Write all the SQL DDL to a specified file.
    STATUS Frequency (secs) job status is to be monitored where
    the default (0) will show new status when available.
    STREAMS_CONFIGURATION Enable the loading of Streams metadata
    TABLE_EXISTS_ACTION Action to take if imported object already exists.
    Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
    TABLES Identifies a list of tables to import.
    TABLESPACES Identifies a list of tablespaces to import.
    TRANSFORM Metadata transform to apply (Y/N) to specific objects.
    Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
    ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
    TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
    TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
    TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
    Only valid in NETWORK_LINK mode import operations.
    VERSION Version of objects to export where valid keywords are:
    (COMPATIBLE), LATEST, or any valid database version.
    Only valid for NETWORK_LINK and SQLFILE.

    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed

    Command Description (Default)
    ------------------------------------------------------------------------------
    CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
    EXIT_CLIENT Quit client session and leave job running.
    HELP Summarize interactive commands.
    KILL_JOB Detach and delete job.
    PARALLEL Change the number of active workers for current job.
    PARALLEL=<number of workers>.
    START_JOB Start/resume current job.
    START_JOB=SKIP_CURRENT will start the job after skipping
    any action which was in progress when job was stopped.
    STATUS Frequency (secs) job status is to be monitored where
    the default (0) will show new status when available.
    STATUS=[interval]
    STOP_JOB Orderly shutdown of job execution and exits the client.
    STOP_JOB=IMMEDIATE performs an immediate shutdown of the
    Data Pump job.
    posted by Senthil Kumaran @ 2:11 AM  