Oracle GoldenGate Interview Questions
What is GG?
GoldenGate (GG) is a tool primarily used for replicating data between databases. Its major advantage is the ability to replicate across heterogeneous databases.
Key capabilities of GG?
-- Real-time replication (source-to-target replication with sub-second latency)
-- Replication between heterogeneous databases
-- Maintains transaction integrity (data is grouped and applied on the target in the same manner as on the source)
GG benefits?
-- High availability
-- Zero-downtime upgrades/migrations
-- Reporting - offload end-user reporting to a target system
-- Data integration - transformation/ETL, taking data from multiple applications
GG deployment topologies?
1.) Unidirectional - single source and single target
2.) Bidirectional - source and target both act as source/target for each other
3.) Broadcast - single source and multiple targets (integration, reporting, DWH)
4.) Consolidation (integration) - multiple sources and a single target
5.) Cascading - one source feeds an intermediate target, which in turn acts as the source for another target
GG processes/components?
The replication configuration consists of the following processes:
Extract -- reads from the source DB redo logs and writes to a local or remote trail
Pump -- reads from a local trail and sends the trail file contents to a remote host
Replicat -- reads a trail and applies the changes to the target database
Manager -- part of every GG instance/installation on both source and target; it manages (start/stop, monitoring, etc.) the other GG processes
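To see these processes on a running installation, you can query them from GGSCI (the group names EXT1/REP1 below are only placeholders):
GGSCI> info all
GGSCI> info mgr
GGSCI> info extract EXT1, detail
GGSCI> info replicat REP1, detail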
What’s initial load, change capture and change delivery?
-- Initial load is required to set up the target DB. It can be done using database utilities when source and target are homogeneous, or using GG itself when source and target are heterogeneous DBs.
-- Change capture and change delivery keep the target in sync with ongoing changes in the source DB, using the following flow:
source DB -> Extract -> Trail -> Extract pump -> Remote trail -> Replicat -> target DB
The Extract pump is optional.
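A minimal sketch of a GG-driven initial load (the group names INITEXT/INITREP, the schema TEST and the host name targethost are placeholders, not from the original post):
On the source:
GGSCI> ADD EXTRACT INITEXT, SOURCEISTABLE
-- initext.prm
EXTRACT INITEXT
USERID gguser, PASSWORD gguser
RMTHOST targethost, MGRPORT 7809
RMTTASK REPLICAT, GROUP INITREP
TABLE TEST.*;
On the target:
GGSCI> ADD REPLICAT INITREP, SPECIALRUN
-- initrep.prm
REPLICAT INITREP
USERID gguser, PASSWORD gguser
ASSUMETARGETDEFS
MAP TEST.*, TARGET TEST.*;
Starting the extract (GGSCI> START EXTRACT INITEXT) then drives the load, with Manager starting the Replicat task on the target.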
What are trail files?
Trail files are created by the Extract/Extract pump. Extract can create output as:
- a flat file
- input for a DB load utility (SQL*Loader, BCP etc.)
- GG trail files (a canonical format, the only format that a Replicat can use)
What is checkpointing in GG?
Checkpointing ensures synchronization across the environment (in an Oracle database the equivalent is the redo logs and data files). Extract always writes only committed transactions to the trail. If the logs, trails and target fall out of sync, or any of the processes stop or crash, the processes can restart from the exact point where they stopped by using the checkpoint information.
Checkpointing (via a checkpoint table/file) is used to ensure:
- data consistency
- data integrity
- transaction ordering
- recovery of a process
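Checkpoint positions can be inspected from GGSCI, for example (EXT1/REP1 are placeholder group names):
GGSCI> INFO EXTRACT EXT1, SHOWCH
GGSCI> INFO REPLICAT REP1, SHOWCH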
GG parameter files?
Two kinds of parameter files:
1.) GLOBALS parameter file - default settings for all processes
2.) Parameter files for every process - Extract/Pump/Replicat
Each process also creates report (log) files and discard files.
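A GLOBALS file is typically only a few lines; for example (the schema and checkpoint table names are placeholders, consistent with the setup shown later in this post):
GGSCHEMA gguser
CHECKPOINTTABLE gguser.ggs_checkpoint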
3) What is the command line utility in GoldenGate (or) what is ggsci?
ANS: GGSCI -- the (Oracle) GoldenGate Software Command Interpreter, the command line interface for GoldenGate.
4) What is the default port for GoldenGate Manager process?
ANS: 7809
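A minimal Manager parameter file using the default port might look like this (the dynamic port range shown is only an example):
PORT 7809
DYNAMICPORTLIST 7810-7820
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS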
5) What are the important files in GoldenGate?
ANS: GLOBALS, ggserr.log, dirprm, etc.
6) What is the checkpoint table?
ANS: GoldenGate maintains its own checkpoints, which record a known position in the trail file from which the Replicat process will resume processing after any kind of error or shutdown. This ensures data integrity. A record of these checkpoints is maintained either in files stored on disk or in a table in the database, the latter being the preferred option.
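The checkpoint table is typically created from GGSCI after logging into the target database (the user and table names here are placeholders and should match the CHECKPOINTTABLE entry in GLOBALS):
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE gguser.ggs_checkpoint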
7) How can you see GoldenGate errors?
ANS:
ggsci> VIEW GGSEVT
or check the ggserr.log file
What transaction types does GoldenGate support for replication?
GoldenGate supports both DML and DDL replication from the source to the target.
What are the supplemental logging pre-requisites?
The following supplemental logging is required (example commands follow this list):
- Database-level supplemental logging
- Object-level logging
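For example, database-level logging is enabled from SQL*Plus and object-level logging with ADD TRANDATA (the schema/table names are placeholders):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD TRANDATA west.account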
Why is supplemental logging required for replication?
When a transaction is committed on the source database, only the changed data is written to the redo log. However, for GoldenGate to apply these transactions on the destination database, the before-image key values are required to identify the affected rows. This data is therefore also placed in the trail file and used to identify the rows on the destination; using the key values, the transactions are executed against them.
List important considerations for bi-directional replication?
The customer should consider the following points in an active-active replication environment.
Primary key: helps to identify conflicts and resolve them.
Sequences: are not supported. The workaround is to use odd/even values, ranges, or concatenated sequences.
Triggers: these should be disabled or suppressed on the target to avoid uniqueness issues.
Data looping: this can easily be avoided using OGG itself.
LAG: this should be minimized. If a customer says there will not be any lag due to the network or heavy load, then we don't need to deploy CDR. But this is rarely the case; there is usually some lag, and it can cause conflicts.
CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML that can be used to detect and resolve conflicts (see the sketch after this list).
Packaged applications: these may not be supported, as they can contain data types not supported by OGG, or the vendor may not allow the application modifications needed to work with OGG.
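A minimal sketch of a CDR rule in a Replicat MAP statement, assuming the table has a last_updated timestamp column used to pick the winning row (all object and column names here are placeholders):
MAP west.account, TARGET west.account,
COMPARECOLS (ON UPDATE ALL),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_updated)));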
Are OGG binaries supported on ASM Cluster File System
(ACFS)?
Yes, you can install and configure OGG on ACFS.
Are OGG binaries supported on the Database File System
(DBFS)? What files can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can however
store parameter files, data files (trail files), and checkpoint files on DBFS.
What is the default location of the GLOBALS file?
The GLOBALS file is located in the Oracle GoldenGate installation directory (OGG HOME).
Where can filtering of data for a column be configured?
Filtering of the columns of a table can be set at the
Extract, Pump or Replicat level.
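For example, a TABLE/MAP statement can select or exclude columns with a COLS/COLSEXCEPT clause (the table and column names here are placeholders):
TABLE hr.employees, COLS (emp_id, first_name, salary);
TABLE hr.employees, COLSEXCEPT (ssn);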
Is it a requirement to configure a PUMP extract process in
OGG replication?
A PUMP extract is optional, but it is highly recommended to use one to safeguard against network failures. Normally it is configured when you are setting up OGG replication across the network.
What are the differences between the Classic and integrated
Capture?
Classic Capture:
The Classic Capture mode is the traditional Extract process
that accesses the database redo logs (optionally archive logs) to capture the
DML changes occurring on the objects specified in the parameter files.
At the OS level, the GoldenGate user must be a part of the
same database group which owns the database redo logs.
This capture mode is available for other RDBMS as well.
There are some data types that are not supported in Classic
Capture mode.
Classic capture can’t read data from the compressed
tables/tablespaces.
Integrated Capture (IC):
In the Integrated Capture mode, GoldenGate works directly
with the database log mining server to receive the data changes in the form of
logical change records (LCRs).
IC mode does not require any special setup for the databases
using ASM, transparent data encryption, or Oracle RAC.
This feature is only available for Oracle databases version 11.2.0.3 or higher.
It also supports
various object types which were previously not supported by Classic Capture.
This Capture mode supports extracting data from source
databases using compression.
Integrated Capture can be configured in an online or
downstream mode.
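As a sketch, an integrated Extract is first registered with the database and then added in integrated mode (EXT1 is a placeholder group name):
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> REGISTER EXTRACT EXT1 DATABASE
GGSCI> ADD EXTRACT EXT1, INTEGRATED TRANLOG, BEGIN NOW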
List the minimum parameters that can be used to create the
extract process?
The following are the minimum required parameters which must be defined in the extract parameter file (a minimal example follows the list).
EXTRACT NAME
USERID
EXTTRAIL
TABLE
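Put together, a minimal extract parameter file might look like this (the group, user and schema names are placeholders):
EXTRACT ext1
USERID gguser, PASSWORD gguser
EXTTRAIL ./dirdat/lt
TABLE test.*;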
I want to configure multiple extracts to write to the same
exttrail file? Is this possible?
Only one Extract process can write to one exttrail at a
time. So you can’t configure multiple extracts to write to the same exttrail.
What type of Encryption is supported in Goldengate?
Oracle GoldenGate provides 3 types of encryption:
- Data encryption using Blowfish
- Password encryption
- Network encryption
What are the different password encryption options available with OGG?
You can encrypt a password in OGG using:
- the Blowfish algorithm, and
- the Advanced Encryption Standard (AES) algorithm
What are the different encryption levels in AES?
You can encrypt the password/data using AES with three different key lengths (an example follows):
a) 128 bit
b) 192 bit
c) 256 bit
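For example, a password can be encrypted from GGSCI and the resulting string pasted into the parameter file (the plain-text password here is a placeholder, and the encrypted string is left elided):
GGSCI> ENCRYPT PASSWORD gguser AES256 ENCRYPTKEY DEFAULT
-- then, in the process parameter file:
USERID gguser, PASSWORD <encrypted string>, AES256, ENCRYPTKEY DEFAULT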
GG setup Question
========================================
Rough steps to enable GG setup for any DB.
Environment Setup
-- A database user for the GG processes (create the replication schema users in the source and target DBs)
   - for Extract on the source DB to read metadata
   - for Replicat on the target DB to perform the DMLs
-- To be able to identify the rows in the target DB:
   - enable supplemental logging in the source DB
   - add TRANDATA for the tables to be extracted
-- Configure the Manager process
-- Create source def/target def files
** Supplemental logging ensures the primary key values are also written to the redo logs during DMLs, which is required to locate the right rows on the target during replication.
What are the prerequisites for GG setup?
- The database should be running in archivelog mode.
- We should have a primary key or unique key on each table.
- The recycle bin feature should be turned off.
- Supplemental logging should be enabled (a quick verification sketch follows).
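A quick way to check these prerequisites on the source database (a sketch; how the recycle bin is disabled varies by version):
SQL> SELECT log_mode FROM v$database;        -- should return ARCHIVELOG
SQL> SHOW PARAMETER recyclebin
SQL> ALTER SYSTEM SET recyclebin = OFF DEFERRED;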
Detailed steps
1.) Create a database user for GG, e.g. GGUSER, with the DBA privilege.
[oracle@gg1 gg]$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Nov 26 10:53:57 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user gguser identified by gguser;
User created.

SQL> grant connect,resource,dba to gguser;
Grant succeeded.
2.) Enable supplemental logging
SQL> alter database add supplemental log data;
Database altered.

SQL> select name,log_mode from v$database;
NAME      LOG_MODE
3.) Edit the GLOBALS file to specify the checkpoint table
- go to the default GG home and
> vi GLOBALS
CHECKPOINTTABLE GGS_CHECKPOINT
4.) Manager setup for both the source and target
> ggsci
GGSCI> edit params mgr
GGSCI> set editor gedit     * if you wish to set the editor
---
PORT 15000
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
DYNAMICPORTLIST 15010-15020
save and quit
GGSCI> info mgr
Manager is DOWN!
GGSCI> start mgr
GGSCI> info manager
-- to check errors --
GGSCI> exit
]$ cd dirrpt
]$ cat MGR.rpt     (errors will be shown here)
5.) Enable TRANDATA for the tables that are part of the replication
GGSCI> dblogin userid gguser, password gguser
Successfully logged into database.
GGSCI> add trandata west.account
GGSCI> add trandata west.*
GGSCI> info trandata west.*
Please follow these steps on the source server (in our case 192.168.3.***).
1. Install GoldenGate.
[oracle@gg1 stage]$ unzip ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip
Unzipping the above file creates "fbo_ggs_Linux_x64_ora11g_64bit.tar". Untar that file into /u002/app/gg.
2. Connect to the GG prompt using the ggsci command and run "create subdirs"; this command creates the required subdirectories under the GG home.
[oracle@gg1 gg]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14
Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (gg1) 1> create subdirs

Creating subdirectories under current directory /u002/app/gg

Parameter files                /u002/app/gg/dirprm: already exists
Report files                   /u002/app/gg/dirrpt: created
Checkpoint files               /u002/app/gg/dirchk: created
Process status files           /u002/app/gg/dirpcs: created
SQL script files               /u002/app/gg/dirsql: created
Database definitions files     /u002/app/gg/dirdef: created
Extract data files             /u002/app/gg/dirdat: created
Temporary files                /u002/app/gg/dirtmp: created
Stdout files                   /u002/app/gg/dirout: created

GGSCI (gg1) 2>
The Manager's default status is STOPPED.

GGSCI (gg1) 3> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     STOPPED
3. Set the GLOBALS and MANAGER files at the source and start the Manager process.

GGSCI (gg1) 4> edit params ./GLOBALS
GGSCHEMA GGUSER
Save the file.
======================
GGSCI (gg1) 5> edit param mgr
PORT 7809
Save the file.
======================
GGSCI (gg1) 6> start MANAGER
Manager started.

GGSCI (gg1) 7> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
======================
4. Set up the extract and pump parameter files.

GGSCI (gg1) 8> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
EXTRACT added.

GGSCI (gg1) 9> ADD EXTTRAIL /u002/app/gg/dirdat/se, EXTRACT EXT1
EXTTRAIL added.

GGSCI (gg1) 10> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     STOPPED     EXT1        00:00:00        00:02:40

GGSCI (gg1) 11> edit param EXT1
extract EXT1
userid gguser, password gguser
exttrail /u002/app/gg/dirdat/se
discardfile /u002/app/gg/dirrpt/ext1.dsc, append
ddl include mapped
TABLE TEST.*;
Save the file.
GGSCI (gg1) 12> ADD EXTRACT EPUMP, EXTTRAILSOURCE /u002/app/gg/dirdat/se, BEGIN NOW
EXTRACT added.

GGSCI (gg1) 13> ADD RMTTRAIL /u002/app/gg/dirdat/es, EXTRACT EPUMP
RMTTRAIL added.

GGSCI (gg1) 14> edit param EPUMP
EXTRACT EPUMP
PASSTHRU
RMTHOST 192.168.3.***, MGRPORT 7809
RMTTRAIL /u002/app/gg/dirdat/es
TABLE TEST.TEST;
Save the file.
GGSCI (gg1) 15> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     STOPPED     EPUMP       00:00:00        00:00:15
EXTRACT     STOPPED     EXT1        00:00:00        00:24:56

GGSCI (gg1) 17> start EXT1
Sending START request to MANAGER ...
EXTRACT EXT1 starting

GGSCI (gg1) 18> start EPUMP
Sending START request to MANAGER ...
EXTRACT EPUMP starting

GGSCI (gg1) 5> info all
Program     Status      Group       Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EPUMP       00:00:00        00:00:08
EXTRACT     RUNNING     EXT1        00:00:00        00:00:06
Please follow these steps on the target server (in our case 192.168.3.***); a sketch of the typical target-side configuration follows.
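A minimal sketch of the target-side setup, assuming the same gguser schema and checkpoint table as above (the group name rep1 and discard file path are placeholders):
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE gguser.ggs_checkpoint
GGSCI> ADD REPLICAT rep1, EXTTRAIL /u002/app/gg/dirdat/es, CHECKPOINTTABLE gguser.ggs_checkpoint
GGSCI> edit param rep1
REPLICAT rep1
USERID gguser, PASSWORD gguser
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, append
MAP TEST.*, TARGET TEST.*;
Save the file, then GGSCI> start rep1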