Tuesday, September 26, 2017

EBS 11i - compiling jsps, just a little info -> not a valid class_dir directory

We know that we can compile JSPs in EBS 11i manually (by using perl -x $JTF_TOP/admin/scripts/ojspCompile.pl --compile --quiet).

We also know that, in EBS 11i, we can clear the JSP cache by deleting the _pages directory located in $COMMON_TOP.

However, there is a small but important thing that we need to know while planning to take these 2 actions.

That is, you can't just clear the JSP cache and then directly compile the JSPs.

This is because ojspCompile.pl wants the $COMMON_TOP/_pages/_oa__html directory to be present, as it is designed to use this directory as its class_dir.

So, if we clear the JSP cache (by running rm -fR $COMMON_TOP/_pages) and then run ojspCompile.pl immediately, we end up with the following;

identifying apache_top.../TEST/testora/iAS
identifying apache_config_top.../TEST/testora/iAS
identifying java_home.../usr/java/jdk1.6.0_23
identifying jsp_dir.../TEST/testcomn/html
identifying pages_dir.../TEST/testcomn
identifying classpath...file:///TEST/testora/iAS/Apache/Jserv/etc/jserv.properties
"not a valid class_dir directory: (/TEST/testcomn/_pages/_oa__html)"


Well.. As seen above, we need to have the JSP cache directory in place to run ojspCompile.pl.

In order to get our JSP cache back, we start Apache and then reach the login page using our browser (reaching it once is enough).

After that, we see that our $COMMON_TOP/_pages/_oa__html directory is created. At this point, we can run ojspCompile.pl without any errors.
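So, the safe order of operations looks like this (a minimal sketch; the adapcctl.sh path and the context name are illustrative and depend on your environment):

$ rm -fR $COMMON_TOP/_pages                                    # clear the JSP cache
$ $COMMON_TOP/admin/scripts/<CONTEXT_NAME>/adapcctl.sh start   # start Apache
# reach the login page once with a browser, so that _pages/_oa__html is recreated
$ perl -x $JTF_TOP/admin/scripts/ojspCompile.pl --compile --quiet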

This was the tip of the day. I hope you will find it useful.

Wednesday, September 20, 2017

Problem installing Oracle FMW 12 - Error - CFGFWK-64254, ONS related error, oracle.jdbc.fanEnabled=false

Today, I was building a 2-node Forms & Reports 12.2.1.3 cluster on Solaris 11.3 SPARC 64-bit, and during the config.sh run, I encountered the error "CFGFWK-64254" during the "OPSS Processing" phase execution.
The underlying error was "java.lang.IllegalArgumentException: ONS configuration failed"..
It was clearly related to RDBMS ONS (Oracle Notification Service), but the database environment where I created the RCU schemas (the Forms and Reports schemas) was a single-node db environment, and it was not configured with ONS.
So the error was unexpected, and it was probably a bug. It was not documented, and that motivated me to find the fix.
The installer of Forms 12.2.1.3 (or let's say FMW), however, wanted to use ONS and insisted on it..
In the earlier config.sh screens, I actually did find a workaround. That is, I could supply the FAN-related Java argument (oracle.jdbc.fanEnabled=false) in those screens, as they had textboxes for supplying Java arguments.

However, once you fill in all the config.sh installation forms and press the "Create" button, you cannot use this workaround, as there is nowhere to supply this Java argument, and you end up with these ONS-related errors.

The workaround (in my opinion, it is a fix / it is a patch) for this is to supply this argument in config_internal.sh. (config.sh indirectly executes config_internal.sh.)

What I did was modify config_internal.sh to include -Doracle.jdbc.fanEnabled=false.
Of course, I wrote it in the right place/line in that script, so that the java invocation picks it up.
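As a rough illustration (the variable name below is an assumption; the exact line differs by version, so locate the spot where config_internal.sh builds the java arguments):

# inside config_internal.sh (illustrative edit, not the exact line)
JVM_ARGS="-Doracle.jdbc.fanEnabled=false ${JVM_ARGS}"   # make the installer skip FAN/ONS usage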
This fixed the problem.
Tested and verified. :)

Monday, September 18, 2017

EBS 12.2 -- NFS-based Shared Application Filesystem -- What to do when the first node is down?

I already wrote a blog post about an important point to be considered when building a Shared Application Filesystem using NFS. (http://ermanarslan.blogspot.com.tr/2017/08/ebs-122-important-point-to-be.html)
This point should especially be considered when we export the NFS shares from the first apps node and mount them from the second node (as instructed in Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2 (Doc ID 1375769.1)).

That is, in such a multi-node shared application filesystem configuration, when the first node, where the NFS shares are hosted, goes down, our EBS apps tier services go down with it.
This is expected behaviour. It is caused by the first node being a single point of failure: if it goes down, the NFS shares go with it.

However, we should be able to start our EBS apps tier services on the surviving nodes, right?
This is an important thing, because the problem on the first node may not be resolved quickly..

Well. Here are the things that we should do to start the EBS apps tier services on the second apps node in such a scenario ->

Note: these steps are for an NFS-based shared application filesystem.

1) Map the apps LUNs to the second (surviving) node: This is a storage- and OS-tier operation. The LUNs where the apps filesystem resides should be mapped to, and mounted on, the second node.
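(An illustrative OS-level sketch only; the device name and mount point below are hypothetical and depend entirely on your storage layout:)

# on the surviving node, after the storage admin maps the LUNs
$ mount /dev/mapper/ebsapps_lun /u01/install/APPS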

2) Update the second node's apps tier context file and run autoconfig on the secondary apps node (see the sketch after the note below).
There are 3 context value updates necessary: s_webentryhost, s_login_page and s_external_url.. This is because these context file attributes are set to appstier1 by default.. However, if we have already implemented a load balancer configuration, then these updates are already done and there is nothing to do in this step.

s_webentryhost  : appstier2
s_login_page : http://appstier2.company.com:8050/OA_HTML/AppsLogin
s_external_url : http://appstier2.company.com:8050/

Note: modify the above apps node name (appstier2) according to your second apps node's hostname..
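A hedged sketch of this step (the editor and the order of commands are illustrative; $CONTEXT_FILE and adautocfg.sh are the standard EBS locations):

$ vi $CONTEXT_FILE      # update s_webentryhost, s_login_page and s_external_url as shown above
$ cd $ADMIN_SCRIPTS_HOME
$ adautocfg.sh          # run autoconfig on the secondary apps node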

3) Start the apps tier services using adstrtal.sh, but with the msimode argument.
($ADMIN_SCRIPTS_HOME/adstrtal.sh -msimode)

msi means Managed Server Independence.. As the first node is down, our Admin Server is down, so the managed servers (like oacore) cannot be started on the second node unless we use the msimode argument..
Without msimode, the managed servers will try to reach the Admin Server to read their configuration, and they will fail.. Without msimode, we see errors like "ERROR: Skipping startup of forms_server2 since the AdminServer is down" while executing adstrtal.sh.

Here is the definition of MSI mode (from Oracle):
When a Managed Server starts, it tries to contact the Administration Server to retrieve its configuration information. If a Managed Server cannot connect to the Administration Server during startup, it can retrieve its configuration by reading configuration and security files directly. A Managed Server that starts in this way is running in Managed Server Independence (MSI) mode.

Well.. As you see, in an NFS-based shared application filesystem environment, there are at most 3 things to do to start the apps tier services on the second node (supposing the first one has crashed and is down).

I tested this approach, and it took me 15 minutes to complete.. Of course, the duration depends on the storage mapping and a bunch of other factors, but it is certain that there is downtime involved..

That's why I recommend using a non-shared APPL_TOP, or a shared APPL_TOP on an ACFS filesystem, or a shared APPL_TOP with NFS shares that come directly from the storage :)

Thursday, September 14, 2017

ODA X6-2M -- virt-manager display problem/garbage characters // yum install dejavu-lgc-sans-fonts

This is an important little piece of information.
It is actually about Linux KVM (Kernel-based Virtual Machine), but as I'm dealing with Oracle, I'm looking at it from the Oracle perspective.
Yes.. The new ODA X6-2M, as you may already know, gives us the option to use Linux KVM for virtualization.
This new KVM thing (it is new from the Oracle perspective) has a GUI to manage the VM environment. It is a management interface that eases the administration of the KVM environment (on ODA or anywhere else).
It is called Virtual Machine Manager, and it is executed using the command virt-manager (as root).
As it is a GUI, it needs an X environment to run.
In the Oracle Linux world, as you may also agree, we mostly use vncserver for displaying X screens remotely.
So, we connect to the vncserver (or use an ILOM remote console, or anything that does the same thing) and execute virt-manager to start the Virtual Machine Manager for KVM.
The issue starts here.
After deploying the ODA and enabling KVM, we run the virt-manager command and we see garbage characters.
We actually see little squares rather than the characters and fonts.

So, in order to fix this, we basically need to install the fonts that Virtual Machine Manager needs.
A simple yum command does the job, and this little piece of information may save you some time :)

Fix: yum install dejavu-lgc-sans-fonts
Tested & verified in the following ODA X6-2M environment:

System Version
---------------
12.1.2.11.0

Component    Installed Version     Available Version
-----------  --------------------  --------------------
OAK          12.1.2.11.0           up-to-date
GI           12.1.0.2.170418       up-to-date
DB           11.2.0.4.170418       up-to-date
ILOM         3.2.7.26.a.r112632    3.2.9.23.r116695
BIOS         38050100              38070200
OS           6.8                   up-to-date

EBS/RAC -- setting TNS_ADMIN for srvctl, SQLNET.ALLOWED_LOGON_* parameters, "no matching protocol" errors

This post is actually a weird one :), but I find it useful.
It is like a mix, as it is about EBS, it is about RAC, it is about the SQLNET.ALLOWED_LOGON_* parameters, it is about IFILE settings, and it is about srvctl..
I'm writing this one because I have been in the field dealing with similar issues in almost every EBS-to-Exadata migration.
In this post, I won't give all the instructions and definitions related to the thing I want to explain. That is, I will assume that the readers of this post already know the following:
what the local listener does, what RAC means, what the srvctl utility is, what sqlnet.ora and the TNS_ADMIN env variable do, and of course what the SQLNET.ALLOWED_LOGON_* parameters are used for.

Let's jump into our topic.
As you may know, we have the autoconfig utility in EBS environments.
This autoconfig utility regenerates certain db-related files when it is run on the db tier.
Today, I will concentrate on sqlnet.ora.
My followers may recognize this topic from my earlier posts, but today I'm actually writing about something different.
We know autoconfig regenerates sqlnet.ora in the $ORACLE_HOME/network/admin/<context_name> directory, and we know that we will lose anything we write there manually (after a dbtier autoconfig run).
That's why, as we know (it is also documented), we need to use IFILEs.
So far so good.
What we also know is that, in RAC, we use the listeners running from the GRID home.
This is not a must, but it is the recommended setup.
So our local listeners run from the GRID homes, and we use IFILEs in the TNS configuration files stored in the GRID homes to make Oracle see the actual sqlnet.ora files that are maintained in our EBS RDBMS homes.
At the end of the day, we make Oracle see what is stored in the IFILE: the actual TNS configuration files maintained in the RDBMS home, right in the directory $ORACLE_HOME/network/admin/<context_name>.
Note that the IFILE contents themselves (e.g., sqlnet_ifile.ora) are not regenerated by autoconfig, so our manual settings are safe there.

Well. For sqlnet.ora and the other files, this story applies.
However, there is one more thing to do if we are running our database in a RAC environment.
That is the cluster registration.
I mean, we want our TNS_ADMIN directory to point to the GRID home.
Actually, we want the following chain:
TNS_ADMIN -> GRID_HOME/network/admin -> RDBMS_HOME/context_name/sqlnet.ora -> RDBMS_HOME/context_name/sqlnet_ifile.

In order to get this, we include the IFILE setting in GRID_HOME/sqlnet.ora, pointing to RDBMS_HOME/context_name.
The sqlnet.ora in RDBMS_HOME/context_name has the IFILE setting pointing to the sqlnet_ifile stored in RDBMS_HOME/context_name. (This comes by default.)
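Here is a hedged sketch of that chain (all paths and the context name below are illustrative; substitute your own homes):

# $GRID_HOME/network/admin/sqlnet.ora
IFILE=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/PROD_ermanhost/sqlnet.ora

# $ORACLE_HOME/network/admin/PROD_ermanhost/sqlnet.ora (regenerated by autoconfig)
IFILE=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/PROD_ermanhost/sqlnet_ifile.ora

# $ORACLE_HOME/network/admin/PROD_ermanhost/sqlnet_ifile.ora (our manual, autoconfig-safe settings)
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8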

So far so good.
But at this point, we need to set TNS_ADMIN, right?
We can set it in our terminal using export TNS_ADMIN, but that will not work for srvctl.
In RAC, we mostly use srvctl to start the listeners and the databases.

So, we execute srvctl setenv to set this TNS_ADMIN environment variable.
This way, we make the srvctl utility aware of our TNS_ADMIN setting.

But for what should we set the TNS_ADMIN env variable? For the listener or for the database?
At first glance, we may think that we must set it for the listener, for the local one, using a command like srvctl setenv listener -l listener_name -T TNS_ADMIN=$GRID_HOME/network/admin

"However, we must actually set it for the db." (Actually, this piece of info is what made me write this post :))

This is because, if we set TNS_ADMIN for the listener, it gets overwritten.

Here is the info from Oracle Support:
Ref: Dynamic Registration and TNS_ADMIN (Doc ID 181129.1)

At instance startup, PMON picks up the TNS_ADMIN environment variable (in  the same way that the listener does in Section (a) above). When PMON subsequently registers this instance, this value of TNS_ADMIN is passed to the listener; causing PMON's TNS_ADMIN value to overwrite the value the listener currently has.
If TNS_ADMIN is not set when PMON starts, then after registration, the listener's TNS_ADMIN value is cleared (ie, behaves as if not set).

So, this is for those who are trying to fix the "no matching protocol" errors -> you should make the SQLNET.ALLOWED_LOGON_* settings in the sqlnet_ifile stored in RDBMS_HOME/network/admin/context_name, and then set TNS_ADMIN using a command like "srvctl setenv database -d DB_NAME -T TNS_ADMIN=blabla"... (not for the listener)
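For example (the database name and the path are illustrative):

$ srvctl setenv database -d PROD -T TNS_ADMIN=/u01/app/12.1.0.2/grid/network/admin
$ srvctl getenv database -d PROD      # verify the setting
$ srvctl stop database -d PROD
$ srvctl start database -d PROD       # restart, so that PMON picks up the new TNS_ADMIN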

Well. This is the tip of the day :)  Take care.

Saturday, August 26, 2017

EBS 12.2- An important point to be considered when building a Shared Application Filesystem using NFS

Nowadays, I come across multi-apps-node EBS installations that are based on the shared application filesystem technology. I see that NFS is mostly utilized rather than OCFS or any other shared filesystem, and that's why I wanted to write a few words about it.

First, a shared APPL_TOP using NFS is completely supported and documented. Oracle EBS has supported the shared application filesystem since application version 11i, and it is still supported in 12.2. It is easy to build and use.
Of course, there are some points to be considered:

  • The sharing of APPL_TOP in a Windows environment is not supported.
  • INST_TOP also will be in shared locations unlike earlier releases.
  • All application tiers sharing the file system should be running on the same operating system.
  • Shared application tier file systems should be in read-write mode on all application tiers.

In addition to the points above, the most important point of all, I think, is high availability.

I mean, when we build a shared application filesystem using NFS, we actually become dependent on the NFS shares. (NFS shares + primary node OS + primary node hardware = primary node's availability)
That is, we export the NFS shares from the primary apps node to the secondary apps node.
So, if the primary apps node goes down (as a result of a system failure), our EBS application services on both nodes go down.
We can't start them on the surviving nodes unless we do a quick failover operation at the storage level.
That is, in case of a failure on the primary apps node, the storage admin should map the disks to the secondary node, and on that secondary node, the disks should be mounted.
By doing so, the secondary node can read and write the apps filesystem directly (without needing to mount the NFS shares exported by the primary node, as the primary is down).
Of course, a revert-back operation should be planned at the storage and OS levels in order to revert these changes when the primary comes back up.

So, for load balancing it is okay; but for high availability, this configuration named "shared application filesystem using NFS" is a little inadequate. I mean, there are things to be done, and there is downtime involved.

Well... If you need both high availability and load balancing on the EBS apps tier, my recommendation is using a non-shared application filesystem, where the apps nodes have their own dedicated application filesystems. (You can also consider OCFS and a shared application filesystem.)

You can implement this non-shared application filesystem based multi-node configuration by doing a single apps tier installation and then adding a node, as sketched below.
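A hedged outline of that flow (only a sketch; the exact commands, prompts and paths for your release are in the document below):

# on the existing apps node, prepare the clone stage
$ cd $ADMIN_SCRIPTS_HOME
$ perl adpreclone.pl appsTier
# copy the application tier file system to the new node, then configure it there
$ cd $COMMON_TOP/clone/bin
$ perl adcfgclone.pl appsTier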

Here is the document to be followed ->
"Cloning Oracle Applications Release 12 with Rapid Clone (Doc ID 406982.1)", section "Option 3: Adding a New Node to an Existing System"

Don't be put off by the name of the document.. It is about cloning, yes.. But what we do to build a non-shared filesystem based multi-node EBS apps tier is the same as what is documented in that note, specifically in the section named "Option 3: Adding a New Node to an Existing System".

Wednesday, August 23, 2017

ODA X6-2 -- an important note about Migrating 11.2.0.4 databases to ODA X6

A few days ago, a follower of mine reported an issue on my forum. (http://ermanarslan.blogspot.com.tr/p/forum.html)

He was complaining about an error encountered while he was trying to restore an 11.2.0.4 database to an ODA X6-2.
As the ODA X6-2 supports both 11.2.0.4 and 12c databases, this issue seemed unexpected at first glance.

However, when I saw the error reported by my follower, I understood that there was a missing piece of info about this restore.

Here, the follower said they were trying to restore their 11.2.0.4 database to the ODA X6, but they were getting "ORA-15204: database version 11.2.0.4.0 is incompatible with diskgroup DATA"

The whole error stack was as follows;

ORA-17502: ksfdcre:3 Failed to create file +DATA/PROD/datafile/a_txn_data04.dbf
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15204: database version 11.2.0.4.0 is incompatible with diskgroup DATA

Well.. When I saw the stack, I saw that the restore operation was being attempted directly on ASM.
As the Grid infrastructure deployed with the ODA X6 is 12c, and as the database compatibility attributes of the ASM diskgroups that come with the ODA deployment are set to 12c, the error that the follower was getting was actually normal and expected!
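You can quickly verify this from the ASM instance. (An illustrative query; v$asm_diskgroup exposes both compatibility attributes.)

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;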

"That is, if you want to restore a 11.2.0.4 database to ODA, you don't use ASM diskgroups directly..
So you use ACFS filesystem rather than ASM diskgroups.."
ODA's Appliance Manager does it with the same way.

Here is what I suggested to my follower:
  • Use ACFS in ODA X6-2. 
  • Create a dummy 11.2.0.4 database from Appliance Manager .. This action will create an ACFS filesystem for you. 
  • Delete that dummy database using standard approaches and use the created ACFS filesystem for your restore. 
Why did I want him to create a dummy database using Appliance Manager?
Because I wanted his restored database to be seen in Appliance Manager.. I wanted him to have a clean environment :)

Well. This is the tip of the day, but it is important, right?
Anyway, use ACFS and know the following;

  • ACFS uses ASM in the backend. It is a filesystem built on top of ASM.
  • ACFS is much more useful than raw ASM diskgroups for this purpose.
  • For instance, ASM has a limit of 1 million files per diskgroup. ACFS, on the other hand, supports 2^40 (1 trillion) files in a file system.
  • ACFS also has a snapshot capability.
  • Oracle uses ACFS in its new Engineered Systems, but ACFS still requires ASM; ACFS uses it. So nothing is useless in engineered systems :)

Monday, August 21, 2017

RDBMS -- a useful Datapump tip, Attaching to a running Job

Sometimes during database migrations, especially big-sized database migrations, we may need to change some of the configuration of our running datapump jobs.
While stopping/cancelling the job and executing datapump again can be a solution, it can be costly in environments where a full database export/import takes several hours.

In these kinds of situations, the datapump argument named attach comes to our help.
I see a need to write this, because we recently used it to increase the parallelism of a running datapump export job, and it performed well.

This attach feature is also useful when we start a job at one location and need to check on it at a later time from a different location.

Anyways, let's take a look at what we did for increasing the parallelism of our export job during an Exadata migration.

At first glance, we predicted the optimal setting for the datapump parallelism to be 8; but when we saw that the machine (in this case, an Exadata X6-2 1/8) had plenty of I/O and CPU resources even though the 8 parallel expdp threads were running, we increased the parallelism to 24, and we did it using the attach argument, without terminating and re-executing our datapump command.

It was as simple as follows;
  • We found our job using "select * from dba_datapump_jobs;"
  • We attached to our running job using "expdp user/pass attach=job_name"
  • We increased the parallelism to 24 using "Export> parallel=24"
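Here is a hedged transcript of that sequence (the job name and the credentials are illustrative):

SQL> select owner_name, job_name, state from dba_datapump_jobs;
$ expdp system/password attach=SYS_EXPORT_FULL_01
Export> parallel=24
Export> status
Export> continue_client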
It is pretty handy, isn't it? Actually, we can do a lot more using this attach argument.. Here is the list of the activities that we can do when we attach to a running datapump job (along with their definitions):
  • ADD_FILE: Add additional dump files.
  • CONTINUE_CLIENT: Exit interactive mode and enter logging mode.
  • EXIT_CLIENT: Stop the export client session, but leave the job running.
  • FILESIZE: Redefine the default size to be used for any subsequent dump files.
  • HELP: Display a summary of available commands.
  • KILL_JOB: Detach all currently attached client sessions and terminate the current job.
  • PARALLEL: Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g.
  • START_JOB: Restart a stopped job to which you are attached.
  • STATUS: Display detailed status for the current job and/or set status interval.
  • STOP_JOB: Stop the current job for later restart.

Wednesday, August 16, 2017

RDBMS -- FLASHBACK FEATURE -- a DEMO, Guaranteed Restore Points with or without Flashback Logging + Prereqs and Restrictions

Today, I will give you some key information about Oracle's Flashback technology. So, this post will be completely related to the database tier.
Flashback is not a new technology, I know.. It was introduced in Oracle Database 10.1; however, it still saves our day sometimes.

It also has some important restrictions, and basically it has two modes: 1) the things we can do when flashback logging is enabled, and 2) the things we can do when flashback logging is disabled.
Also, the disk consumption and the performance overhead change according to the way we choose to use the Flashback Database feature.

The thing that made me write this post is an upcoming upgrade operation (an applications upgrade) on a mission-critical Oracle RAC database.

The database size is huge, so it is not efficient to back up this database just before the upgrade (both from the RMAN perspective and the storage perspective).
In addition, even if we could create a backup, the restore time is estimated to be very long, and it cannot be accepted by the business.

So in this situation, what do we recommend? The Flashback Database feature.
The business and application guys just want to have the ability to restore the database to a specific point before the upgrade (in case of a failure).
So, more precisely, what do we recommend? A guaranteed restore point "without" enabling flashback logging. (Note that flashback logging is enabled by the command -> alter database flashback on)

These recommendations are based on the following facts;
  • The Flashback Database feature is not only used for traditional recovery; it can also be used when we need quick recovery (scenarios like database upgrades, application deployments, and testing, where test databases must be quickly created and re-created).
  • A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled.
  • Flashback Database is much faster than point-in-time recovery, because it does not require restoring datafiles from backup and requires applying fewer changes from the archived redo logs.
I will demonstrate the things I just mentioned above, but first, let's see the prerequisites and restrictions of the Flashback feature, as well as those 2 different configurations that can be used to benefit from it -- "GUARANTEED RESTORE POINT without Flashback Logging" vs "FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Flashback Logging)".
(I categorized them as 2, but they can be categorized further if desired/needed.)

PREREQS, RESTRICTIONS AND THE 2 TYPES OF USAGE OF FLASHBACK FEATURE

Let's start with the prerequisites.. I will just give a quick list of prereqs here (a quick illustrative sketch follows the list);
  • The database should be in archivelog mode.
  • We must use the FRA (db_recovery_file_dest and db_recovery_file_dest_size), because the flashback logs (.flb files) are created there.
  • If needed, flashback logging must be enabled using the "alter database flashback on" command, while the database is in mount mode.
  • Our database must be Enterprise Edition.
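Here is that sketch (the FRA size and location below are example values, not recommendations):

SQL> archive log list;                                  -- confirm archivelog mode
SQL> alter system set db_recovery_file_dest_size=50G;
SQL> alter system set db_recovery_file_dest='+RECO';
SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;                       -- only if flashback logging is desired
SQL> alter database open;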
Okay.. Let's continue with the restrictions part;
  • The Flashback Database feature cannot be used with Standard Edition Oracle Databases.
I mean, flashback logs can be created (tested with a guaranteed restore point), but we just can't restore our database to the restore points..

SQL> flashback database to restore point ERMAN_TEST;
flashback database to restore point ERMAN_TEST
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database

Similarly, if we try to enable flashback logging on a Standard Edition Oracle Database, we end up with this ->

In mount mode;
SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database
  • If we don't enable flashback logging, then the first guaranteed restore point must be created in the mount state. Otherwise, we get the following error;
SQL>CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'ERMAN_TEST'.
ORA-38787: Creating the first guaranteed restore point requires mount mode when
flashback database is off.

Lastly, here is a list of restrictions that I want to give;
  • We cannot flash back to an SCN ahead of the current SCN.
  • When we restore using flashback, our database must be opened with RESETLOGS.
  • We just cannot use Flashback Database to undo a shrink datafile operation. (Shrinking a datafile or dropping a tablespace can prevent flashing back the affected datafiles.)
  • If we use Flashback Database to flash our database back to a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation.
  • In order to flash back the database, it must be in the mount state; otherwise we get the following error -> ORA-38757: Database must be mounted and not open to FLASHBACK.

GUARANTEED RESTORE POINT without Enabling Flashback Logging vs FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Enabling Flashback Logging):

In this subsection, I will give you the difference between a "flashback logging enabled Oracle Database" and a "flashback logging disabled Oracle Database".

First of all, we can use a guaranteed restore point in either case: both when flashback logging is enabled and when flashback logging is disabled.

The main difference is that, without flashback logging, the modified blocks are only saved one time (their before-images are captured the first time they are modified after the restore point is created).

With flashback logging, on the other hand, modified blocks are saved on every modification. (Note that flashback logging is enabled using "alter database flashback on".)

This is because we enable flashback logging to be able to restore to any SCN (according to our flashback retention).

But if we don't enable flashback logging and just create a guaranteed restore point, we can only restore to the SCN of the time at which we created that guaranteed restore point.

Consider a scenario where the business and application guys want to do an upgrade, and they want us to have the ability to restore the database to a specific point before the upgrade (in case their upgrade fails); then we use a guaranteed restore point without flashback logging. (Of course, we could also enable flashback logging, but it is unnecessary in this case.)

But if the business and application guys want us to have the ability to restore our database to any point in time between the start of the upgrade and the end of the upgrade, we enable flashback logging, create one or more guaranteed restore points, and configure our flashback retention policies.

One last important info before continuing;

Each update will generate REDO and UNDO, and each block in the UNDO tablespace that is used for the first time will require flashback data to be written. With this in mind, we can say that if our database is 100 GB, then potentially the flashback data, where only a guaranteed restore point is used, should be at most 100 GB.

DEMO:

Let's make a demo and see if it works :)

In this demo, I am doing the following;
  • Create a table named T.
  • Load 100000 rows into table T using a loop.
  • Create a guaranteed restore point in mount mode.
  • Update those 100000 rows using a loop.
  • Restore the database to the guaranteed restore point.
  • Drop the restore point.
I'm creating the restore point just before the table update, because I want to get back to the point where the update has not been executed yet.
  • First, I start my database and create the table T.
SQL> startup
ORACLE instance started.
Database mounted.
Database opened.

SQL> create table t ( x int, y char(50) );

Table created.
  • Then I load 100000 rows into the table T.
SQL> begin
for i in 1 .. 100000
loop
insert into t values ( i, 'x' );
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.
SQL> exit
  • I check my FRA to see if any flashback logs have been created at this point. As I didn't enable flashback logging and didn't create any restore point yet, I don't expect to see any flashback logs there..
[oracle@prelive flashback]$ ls -lrth
total 0
  • Then I put my database into mount mode and create my guaranteed restore point as follows; (remember: creating the first guaranteed restore point requires mount mode when flashback database is off)
SQL> shu immediaTE;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE;
Restore point created.
  • Now I check to see if any flashback logs have been created, and I see a file has been initialized..
SQL> !ls -lrt
total 15268
-rw-rw---- 1 oracle dba 15613952 Aug 15 15:11 o1_mf_ds5s89wn_.flb
  • Next, I update all the rows in table T as follows.. I'm doing this to show you the flashback logs generated as a result of this update.
SQL> alter database open; (remember, my db was in mount mode)
Database altered.

SQL> begin
for i in 1 .. 100000
loop
update t set y='y' where x=i;
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
  • Now I check the generated flashback logs both from DB and from the filesystem.
SQL> select flashback_size/1024/1024 MB from v$flashback_database_log;

MB
----------
44.6484375

SQL> !ls -lrth
total 45M
-rw-rw---- 1 oracle dba 15M Aug 15 15:15 o1_mf_ds5s89wn_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 15:19 o1_mf_ds5shl52_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 16:12 o1_mf_ds5spgkt_.flb

As you see, 45 MB of flashback logs were created.. (The table size was approximately 10-15 MB.) This is the result of the modified blocks + the undo data..
  • Now, I put my database into mount mode and flash back my database to my guaranteed restore point.
SQL> shu immediatE;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> flashback database to restore point ERMAN_TEST;

Flashback complete.
  • I successfully issued my flashback command, and now it is time to open the database; but the database must be opened using resetlogs.. (Otherwise, I get "ORA-01589: must use RESETLOGS or NORESETLOGS option for database open")
SQL> alter database open resetlogs;
Database altered.
  • Now I check table T and see if my update has been reverted.
SQL> select * from t where y='y';
no rows selected

Yes. As if it never happened :)

  • Lastly, I drop my restore point and see that the relevant flashback logs are cleared from the Flash Recovery Area.
SQL> drop restore point ERMAN_TEST;
Restore point dropped.

SQL> exit

[oracle@prelive flashback]$ ls -lrt
total 0 ---CLEAN!

That's all :) It is a nice feature, and it works like a charm :)

Saturday, July 22, 2017

Book: Practical Oracle E-Business Suite, last package delivered :)

Today's blog post is not technical :)
This post is book-related and it will be short.

As you know, our book, Practical Oracle E-Business Suite, has been out for almost a year now.

I really appreciate the feedback and the increasing interest in this book.

I want my followers and my readers to know that, in addition to my desire to write, this feedback and recognition are my best motivations for writing.

Actually, these are my new motivations for making continuous contributions to the Oracle community.

After the publication, I had the 10 copies that Apress sent to me. As of yesterday, I delivered the last copy that I had left.

Once again, many thanks to all the people who, directly or indirectly, consciously or unconsciously, have helped me get to where I am today.