Friday, September 23, 2016

EBS 12.2 - cloned instance cannot show images / Connection refused: proxy: HTTP: attempt to connect to / GET /OA_MEDIA/oracle_white_logo.png HTTP/1.1" 503

After cloning an EBS 12.2 environment that has multiple managed servers for certain services, you may face an image rendering problem.

The problem can actually be encountered after disabling some of the managed servers in a newly cloned environment. (Generally, we don't have much load in test/clone environments, so after cloning we disable some of the managed servers that are configured to run in parallel in the production environment.)

The issue I'm talking about is an image rendering and layout problem like the one shown in the example screenshot below;

The issue depicted in the screenshot above was on a newly cloned environment. The customer was not sure when it started, but the following changes had been made in this environment;

-disabled SSL.
-disabled some of the oacore managed servers in this newly cloned environment.
-applied AD and TXK Delta patches (using an online patching cycle)

SSL looked like a possible cause, as some profiles like APPS_FRAMEWORK_AGENT were still pointing to the https URL at site level.
So I disabled SSL once again, properly this time, but the issue continued.

After analyzing the Oracle HTTP Server logs, these were the findings;

HTTP Server log: [2016-09-22T19:06:33.0646+03:00] [OHS] [ERROR:32] [OHS-9999] [core.c][host_id: servername_deleted_for_security_reasons] [host_addr:] [tid: 139926295234304] [user: applmgr] [ecid: 005FHfORQun7q2w5OF0Fyd0006jJ000005] [rid: 0] [VirtualHost: main] (111)Connection refused: proxy: HTTP: attempt to connect to (servername_deleted_for_security_reasons) failed

HTTP Server access log: - - [22/Sep/2016:19:06:33 +0300] "GET /OA_MEDIA/oracle_white_logo.png HTTP/1.1" 503 306

So, the HTTP server was trying to reach port 7201 (which was the current patch edition port). There must have been a stale reference left behind, as the oacore server was currently listening on 7202.
This could have been caused by a problematic managed server removal operation, or by a problem in the cutover phase of the AD and TXK delta patch application.

The fix I applied was deleting the unnecessary managed server reference for oacore, and it worked.

The solution: perl $FND_TOP/patch/115/bin/ -contextfile=$CONTEXT_FILE -configoption=removeMS -oacore=<host_name>:7201

EBS // After migrating EBS database to RAC, MTL_ONLINE_TRANSACTION_PUB.process_online returns false // no manager

You may encounter weird errors in workflow and concurrent programs, like Interface Trip Stop, after migrating an EBS database to a RAC environment running on traditional servers or on Exadata.
These problems are encountered when you have enabled load-balanced database connections for the Concurrent Managers (s_cp_twotask) -- they are not related to Parallel Concurrent Processing.

These types of problems are due to application tier processing that needs to stay on a single database node being spread across multiple database nodes.
Thankfully, EBS gives us a profile ("Concurrent:TM Transport Type") to work around this situation, as explained in one of my previous blog posts, which was about an error in "Wf_engine_Util.Function_Call".

The same fix still applies to Interface Trip Stop.
As for the solution, we set Concurrent:TM Transport Type to "QUEUE". Normally (when Concurrent:TM Transport Type is set to "PIPE"), the Transaction Manager uses DBMS_PIPE to communicate with the Service Manager and with any user process initiated by Forms or by Standard Manager requests. DBMS_PIPE, by design, requires the sessions that need to communicate with each other to be in the same instance. Since DBMS_PIPE is a package that only lets sessions in the same instance communicate (using Oracle pipes), relying on it in a RAC environment is not acceptable. That's why we set Concurrent:TM Transport Type to "QUEUE" in RAC environments, unless we have multiple application nodes whose concurrent manager services are dedicated to their own database instances.

Anyway, let's give the problem and the solution;
The related error in Interface Trip Stop is as follows;

InterfaceTripStop: processing stop_id 4656255 for INV OM DSNO
no. of OE messages :
MTL_ONLINE_TRANSACTION_PUB.process_online returns false
Error Code:İşlem işlemcisi hatası (Transaction processor error)
Error Explanation:Bu talebi işlemek üzere bir eşzamanlı yönetici tanımlanmadığından talep işlenemiyor. (The request cannot be processed because no concurrent manager is defined to process this request.)

This error message is in Turkish :), but it simply means that no concurrent manager is defined to process the request.

Action Plan:
  1. Stop the Concurrent Managers and the workflow service components
  2. Set the Concurrent:TM Transport Type profile to "QUEUE"
  3. Restart the apps tier + db (if possible)
It is also good to recreate the Concurrent Manager views (just in case), using: FNDLIBR "FND" "FNDCPBWV" apps/<passwd> "SYSADMIN" "System Administrator" "SYSADMIN"
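In practice you would set the profile from the System Administrator responsibility; as a sketch only (the internal profile name CONC_TM_TRANSPORT_TYPE is my assumption here), step 2 could also be scripted from SQL*Plus as APPS;

```sql
-- Sketch: set "Concurrent:TM Transport Type" to QUEUE at site level.
-- Assumes the profile's internal name is CONC_TM_TRANSPORT_TYPE.
DECLARE
  l_ok BOOLEAN;
BEGIN
  l_ok := FND_PROFILE.SAVE(x_name       => 'CONC_TM_TRANSPORT_TYPE',
                           x_value      => 'QUEUE',
                           x_level_name => 'SITE');
  IF l_ok THEN
    COMMIT;
  END IF;
END;
/
```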

    Tuesday, September 20, 2016

    EBS 12.2.6 released!

The public announcement was made on Sep 15, 2016: EBS 12.2.6 is released!

You can reach its readme through the Oracle Support document: Oracle E-Business Suite Release 12.2.6 Readme (Doc ID 2114016.1)

Like 12.2.5, EBS 12.2.6 is an online patch (Patch 21900901); however, it is also installable using adop's downtime option (apply_mode=downtime).
If you are already on EBS 12.2.2, 12.2.3, 12.2.4 or 12.2.5, you apply the patch online by executing an online patching cycle. On the other hand, if you are on an older version of EBS (11i, 12.0 or 12.1), or if you are doing a fresh EBS 12.2 install, you are good to go with the downtime option.

Note that EBS 12.2.6 has a minimum database version requirement (see the readme for the exact version), and that the Oracle E-Business Suite 12.2.6 Release Update Pack requires Fusion Middleware Technology Stack (FMW) (11gR1 PS6) or higher.

    As for the upgrades;
    EBS 11i customers should first upgrade to 12.2 before applying 12.2.6.
    EBS 12.0 and 12.1 customers should first upgrade to 12.2 before applying 12.2.6.
    EBS 12.2 customers can directly apply 12.2.6.

Lastly, it is important to mention that EBS 12.2.6 brings new functional capabilities across Oracle E-Business Suite, a modern user experience, mobility, and operational efficiency.
Check out the official announcement and the highlights video to get an overview of what is new in EBS 12.2.6.


    Monday, September 12, 2016

    EXADATA -- EBS on EXADATA, migration, 2 tips about parallelization and the listeners + 1 fix for the scan listener registration problem

Exadata is powerful; it is said to be the world's fastest database machine, and it is stable.
However, it is like an instrument (the Oracle database is also like an instrument): you need to know how to play it.

I have been doing Exadata migrations since 2012, and I was the first to implement EBS R12 on Exadata in Turkey. Since then, I have migrated several EBS environments to Exadata X2, X3, X4 and X5. These migrations have become my routine. Although I have a team of 5 people, I have always wanted to do the migrations myself, as it is an always-evolving routine: in every new migration there is something to learn and some optimization to be done.
So here I am, wanting to give you 2 tips regarding EBS on Exadata implementations.
I recently migrated an EBS 11i environment to Exadata X6. The EBS in question was a critical production system with lots of consumers around it.
The POC and TEST migrations went perfectly; there were no issues at all.
Everything was running as expected and everything was configured as requested by the customer.
However, during the PROD migration, the customer wanted 2 new things;
1) They wanted a more optimized parallelization configuration in the EBS PROD database residing on Exadata
2) They wanted Exadata to be a consolidated environment, where they would host lots of EBS databases including Test, Dev, UAT and more.


So, regarding the parallelization, I implemented Auto DOP. The Exadata was an X6 1/8 and the RDBMS software on it was Oracle Database 12c, so it was time to trust Oracle a little: I implemented the AUTO degree policy and let Oracle decide the parallel degrees. Still, I needed to limit parallelism, as I didn't want the server resources to be consumed suddenly.

So, I configured the parallel parameters for this job as follows;
--Remember, this is an Exadata X6 1/8; these settings may need to change according to your environment and needs.

    SQL> show parameter parallel

    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      AUTO
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     TRUE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     480
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     32
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     TRUE
    parallel_server_instances            integer     2
    parallel_servers_target              integer     320
    parallel_threads_per_cpu             integer     1

So, in brief, what I have configured and told Oracle is:
you can have a maximum of 480 parallel servers; you should not allocate lots of parallel servers for a single query; you should decide the parallel degrees automatically (Auto DOP); and you should run parallel statements within a single node (don't scatter the parallel servers of a query across nodes -- this is an EBS requirement).
Also, I tell it to activate the parallel statement queuing feature when the allocated parallel server count reaches 320.
So what I am actually saying is: if 320 of the parallel servers are allocated and a new query requires 200 parallel servers, queue it until 200 parallel servers are available (320+200=520 exceeds parallel_max_servers).
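As a sketch, the settings above could be applied like this (values from this X6 1/8 example; adjust and test for your own environment);

```sql
-- Auto DOP with a degree cap, node-local PX (an EBS requirement),
-- and statement queuing activating at 320 busy parallel servers.
alter system set parallel_degree_policy=AUTO sid='*';
alter system set parallel_degree_limit=CPU sid='*';
alter system set parallel_force_local=TRUE sid='*';
alter system set parallel_max_servers=480 sid='*';
alter system set parallel_servers_target=320 sid='*';
alter system set parallel_threads_per_cpu=1 sid='*';
```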

Let's explain how Auto DOP works a little, as it is the most important thing in this parallelization configuration;

When Oracle estimates the time for a query to be less than PARALLEL_MIN_TIME_THRESHOLD, it runs the query serially.
If the estimated time is higher than PARALLEL_MIN_TIME_THRESHOLD, then Oracle (12c) looks at the CPU and IO costs and decides the parallel degree accordingly.

At this point, to prevent a single query from allocating lots of parallel processes, we set PARALLEL_DEGREE_LIMIT.

PARALLEL_DEGREE_LIMIT is set to CPU by default. This setting means: "the maximum DOP is limited by the DEFAULT DOP".

DEFAULT DOP is calculated as follows;
PARALLEL_THREADS_PER_CPU * SUM(CPU_COUNT across all cluster nodes). For example: 1*44=44

To get the optimized value, Oracle uses: ACTUAL DOP = MIN(IDEAL DOP, PARALLEL_DEGREE_LIMIT)

So when Oracle decides the IDEAL DOP for Auto DOP, it uses the formula and method above, but it also uses the values stored in resource_io_calibrate$.

A higher value for MAX_PMBPS (maximum megabytes per second) in this table means a lower IDEAL DOP.

For example, the following delete-insert makes Oracle decide on a lower IDEAL DOP.
Note: we set MAX_PMBPS to 200 MB/s. Because of this, the IDEAL DOP will be lower, and since ACTUAL DOP = MIN(IDEAL DOP, PARALLEL_DEGREE_LIMIT), Oracle will decide on low parallel degrees.

    delete from resource_io_calibrate$;
    insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);

However, if we set MAX_PMBPS to a lower value, like the 20 MB/s shown in the example below, Oracle will decide on a higher IDEAL DOP.

    delete from resource_io_calibrate$;
    insert into resource_io_calibrate$
    values(current_timestamp, current_timestamp, 0, 0, 20, 0, 0);
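Before playing with this table, it is a good idea to check what is currently stored; a quick check through the documented view;

```sql
-- MAX_PMBPS is the value Auto DOP uses for its IO cost estimate.
select max_iops, max_mbps, max_pmbps, latency
from dba_rsrc_io_calibrate;
```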

Well, the parameters I gave above can be changed according to your environment; I wrote this blog post for awareness. But you get the idea, right?
The idea is: use Auto DOP, set the parallel parameters in an optimized way, and limit unnecessary parallel server allocation.
What about MAX_PMBPS in resource_io_calibrate$? I deleted the row, as recommended by Oracle, and things continued perfectly fine. But I knew that I could take control through resource_io_calibrate$ in case of need.


Regarding the consolidation, the listener configuration comes into play. Normally, Oracle recommends having EBS listeners run from the GRID Home. However, what if I will have several EBS databases on Exadata? Several EBS databases in the same cluster, I mean. Will I put lots of IFILE pointers from the tnsnames, sqlnet and listener files stored in the GRID Home to the different TNS_ADMIN directories of the different EBS database homes? That seems dirty, right?

Here, the following approach comes to our help;

What we do is;
We create the EBS local listeners as the oracle OS user, in the EBS Oracle Homes.
We execute srvctl as the oracle OS user to do that. (this is important)
Then we set the TNS_ADMIN environment variables for these listeners and their associated databases, again using srvctl. We set TNS_ADMIN to the $ORACLE_HOME/network/admin directory associated with the related listeners.
We put an IFILE pointer from that default network admin directory to the EBS TNS_ADMIN, and we are ready to go.
This approach isolates the listeners and the tnsnames and sqlnet files.
It is hard to explain, but the following output will give you a better picture of what I am talking about;
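As a sketch (names, paths and the port below are hypothetical; run these as the oracle OS user), the steps above map to srvctl commands like the following;

```shell
# Create the EBS listener from the EBS Oracle Home, on its own port.
srvctl add listener -l LISTENER_PROD -p 1523 \
  -o /u01/app/oracle/product/
# Pin TNS_ADMIN for the listener and its database to the EBS Home's
# network/admin directory.
srvctl setenv listener -l LISTENER_PROD \
  -t TNS_ADMIN=/u01/app/oracle/product/
srvctl setenv database -d PROD \
  -t TNS_ADMIN=/u01/app/oracle/product/
srvctl start listener -l LISTENER_PROD
```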

Here is what it looks like after I configured them;
Note: Please concentrate on the listener named LISTENER_PROD;

    [root@exadb01 ~]# ps -ef|grep inh
    oracle   102244      1  0 Sep11 ?        00:00:00 /u01/app/oracle/product/ LISTENER_TEST70 -no_crs_notify -inherit
    oracle   160117      1  0 Sep11 ?        00:02:20 /u01/app/oracle/product/ LISTENER_PROD -no_crs_notify -inherit
    grid     160740      1  0 Sep11 ?        00:01:35 /u01/app/ LISTENER_SCAN2 -no_crs_notify -inherit
    grid     160977      1  0 Sep11 ?        00:01:19 /u01/app/ LISTENER_SCAN3 -no_crs_notify -inherit
    root     282658 280422  0 13:03 pts/1    00:00:00 grep inh
    grid     339575      1  0 Sep01 ?        00:00:14 /u01/app/ MGMTLSNR -no_crs_notify -inherit

    [grid@exadb01 ~]$ srvctl config listener -l LISTENER_PROD

    Type: Database Listener
    Network: 1, Owner: oracle
    Home: /u01/app/oracle/product/
    End points: TCP:1523
    Listener is enabled.
    Listener is individually enabled on nodes:
    Listener is individually disabled on nodes:

tnsnames.ora: (sqlnet.ora and listener.ora have these kinds of IFILEs as well)
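The IFILE entries themselves did not make it into the paste; a typical pointer (paths are hypothetical) looks like this;

```
IFILE=/u01/app/oracle/product/
```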

    [grid@exadb01 ~]$ srvctl getenv database -d PROD

    [grid@exadb01 ~]$ srvctl getenv listener -l LISTENER_PROD

So with this configuration, every EBS listener runs from its own Oracle Home, and every EBS listener sees only its own tnsnames, sqlnet and listener files.

One last thing (it is important);
If you see that your EBS database is not registering itself with the scan listener, check the tnsnames.ora stored in the EBS TNS_ADMIN directory. If your EBS is 11i, you will probably see a TNS name entry in the form of scan name and port (scan_name:port)... You know what? That entry is not correct; it should not be there. It is created by autoconfig, but it is a bug. So delete that record and use "alter system register" to make your database register itself with the scan listener. You will see that it works.
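After deleting the bogus entry, forcing the registration is a one-liner;

```sql
-- Ask PMON/LREG to re-register the instance's services with the
-- listeners defined by local_listener/remote_listener.
alter system register;
```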

Well, we have reached the end of this blog post. It is a complicated one, I know, but I still find it quite useful. I hope you have enjoyed reading.

    Saturday, September 10, 2016

    RDBMS - Active data center from Oracle perspective

Yesterday a question was asked of me; actually, it was a request to give an Oracle solution for an Active-Active data center deployment.
The answer I gave was via the Oracle presentation available through the following link: ("Deploying Active-Active Data Centers Using Oracle Database Solutions")

So, basically, there are 4 options we can offer for an active-active database deployment.
It actually depends on what we understand by the term Active-Active.
That is, if we want read+write on the DR site,
there are 3 options;

1) RAC Extended Clusters: applicable when the distance between the sites is not more than 25 km. This is a RAC configuration in which the nodes can be in different sites.
2) Oracle Streams: this is like replication, but a two-way one. No distance limit.
3) GoldenGate: primary and DR are both read-write. No distance limit.

But if we want a synchronized copy database and want to use it only for data extracts, reporting and heavy SQL queries -- in other words, if we want read-only -- then the option is Active Data Guard.

4) Active Data Guard: after meeting the network requirements, the latency between sites can be decreased to a minimum using a sync transport + real-time apply configuration.
However, there won't be any writes taking place on the DR, even with Active Data Guard implemented.
This is actually for offloading reports, SQL queries and data extract jobs to the DR. In case of a disaster, the switchover is almost transparent to the applications. Note: Active Data Guard requires a license.

What about the applications? Well, it depends on the applications. Normally, rsync or any storage-level technology is enough for having a synchronized copy of the applications. Any configuration that can be done in the application layer can be implemented as well, to support a transparent disaster recovery solution. However, if an active-active data center deployment needs to be done, the application layer should be analyzed and certified for the active-active deployment in place. This is not an issue when you use RAC Extended Clusters, but when you use GoldenGate or Streams for an active-active data center deployment, the applications should be analyzed and tested accordingly (some custom solutions may even be needed there).

    Thursday, September 8, 2016

    DR -- RDBMS // recommended/adequate DR Configuration Diagram --an overview

Here is a diagram summarizing what kind of configuration is recommended for a production Oracle Database environment with an adequate DR configuration.
The DR solution, a 2-tiered one (a Local DR and a Remote DR) shown in the figure below, is completely dependent on the use of Oracle Data Guard.
I recommend Maximum Availability mode for the Local DR/Standby and Maximum Performance mode for the Remote DR (as shown in the figure below -- in the text box).
The figure is self-explanatory, but if you have any questions, I will happily answer.

    You can find more info about the Oracle DR and Standby configuration in this blog...


    EBS -- RAC/ASM aware auclondb.sql

auclondb.sql is something we use in EBS migrations. I have used it in many EBS-on-Exadata implementations, during the migration phase.
It basically generates the script (aucrdb.sql) that creates a database with tablespaces and file structures similar to the database against which auclondb.sql is run.
However, it is not ASM aware. That is, auclondb.sql creates the database creation script perfectly, but it names the datafiles without considering ASM.
As you may recall, when we create a datafile in ASM environments (like RAC/ASM and Exadata), we just name it with the name of the diskgroup (for example: +DATA) and leave it to Oracle to store it in the related ASM directories with the related ASM filenames.
auclondb.sql, however, creates the tablespace creation script using a format like diskgroup/file_name.dbf. So, although this is not a big problem, it is not a good practice in my opinion.
This is because, when we use the script (aucrdb.sql) created by auclondb.sql to create our target database on ASM, aucrdb.sql instructs Oracle to create each file with an explicit filename. ASM does not like this; it is not designed for it. So, what ASM does to make aucrdb.sql happy is create ASM aliases for the datafile names provided by aucrdb.sql, while still using its own file naming in the backend.
This ends up with lots of ASM aliases on the ASM filesystem, and it just looks dirty (consider lots of DBs created this way). Also, getting rid of these aliases is another problem: they are recorded in the controlfiles, so the controlfiles must be recreated after deleting the aliases.

So, what I recommend is modifying auclondb.sql (which is not supported, but I don't see any harm in it, as long as it is done appropriately) to generate the datafile names according to ASM.
I won't share the modified auclondb.sql here, but I did this and it works.
With just a little modification, auclondb.sql can be changed to create the database creation script aligned with ASM. That is, it can be modified to create the datafiles using only the diskgroup name (for example: +DATAC1), thus leaving the file naming and directory pathing to ASM.
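To illustrate the kind of change (tablespace, diskgroup and file names here are hypothetical), the generated aucrdb.sql would go from an explicit file name to a diskgroup-only name;

```sql
-- Stock auclondb.sql output: explicit name, ends up as an ASM alias.
CREATE TABLESPACE APPS_TS_TX_DATA
  DATAFILE '+DATAC1/PROD/apps_ts_tx_data01.dbf' SIZE 2000M;

-- Modified output: diskgroup only; ASM does the naming and placement.
CREATE TABLESPACE APPS_TS_TX_DATA
  DATAFILE '+DATAC1' SIZE 2000M;
```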

    This is just a hint. Keep that in mind :)

    EBS R12/11i - After Disabling SSL on Apps Tier, " SSL handshake failed: SSLProtocolErr" in Workflow Mailer logs

A new customer reported a quite interesting problem :)
The situation was a little strange. The Workflow Notification Mailer was trying to do some https/SSL work, although SSL was never implemented for it.

The issue started after disabling SSL/HTTPS on the EBS apps tier.
After this change, the EBS URLs were changed back to http, but the mailer strangely kept trying to do some SSL work and encountered errors, as seen in the following line from the workflow mailer's log;

    getFormattedMessages()]:Problem getting the HTML content ->$FormatterSAXException: Problem obtaining the HTML content -> Unable to invoke method HTTPClient.HTTPConnection.Get caused by: SSL handshake failed: SSLProtocolErr

So it was obvious that the Workflow Mailer was trying to use some EBS framework agents, and although the URLs of these agents were updated to http, the mailer was still trying to reach them through https.
Why the mailer tries to reach the EBS framework agents is another subject, and I have already explained it in my earlier posts (just search my blog with the "workflow mailer" keyword and you'll find lots of info about it).

Diagnostics and things tried:
Restarting the apps tier, the db, or the mailer itself does not solve this issue.
All the context files on the filesystems, the context files stored in the FND_OAM_CONTEXT_FILES table, and all the related URL-form profiles were appropriately set to http.
After some diagnostics, I found that the new workflow emails were actually delivered successfully, but some of the old ones were not.
So the problem was in the mailer queues -- I mean queues like wf_deferred and wf_notification_out. It seemed that the EBS framework agent's URL, which was https when these problematic messages were queued, was still sitting in the queues.
So the mailer was reading this info from the queues, and although the agent's URL had changed, it was still trying to reach the agents using the old saved https URL.

Anyway, this is actually expected behavior, but it is not very accurately documented.
The fix was rebuilding the queues.
For rebuilding the queues, the Oracle Support document named "How To Rebuild Mailers Queue when it is Inconsistent or Corrupted? (Doc ID 736898.1)" is good to follow.
The problem is also described there;

Look -> When you have changed the profile option "WF: Workflow Mailer Framework Web Agent" but messages in the mailer's queue still refer to the old value: "In such situations, mailer's queue must be rebuilt."

The action plan:
Take a backup (just in case)
Stop the WF Mailer
Follow note 736898.1 and rebuild the queues (you will use the wfntfqup script -> sqlplus apps/<apps_pwd> @$FND_TOP/patch/115/sql/wfntfqup APPS <apps_pwd> APPLSYS)
Start the WF Mailer.
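Before and after the rebuild, it can help to see what is sitting in the outbound queue; a sketch only (queue table and schema names assumed per a standard install);

```sql
-- Count messages by state in the mailer's outbound notification queue.
select msg_state, count(*)
from applsys.aq$wf_notification_out
group by msg_state;
```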

    Friday, August 26, 2016

    RDBMS -- SYSAUX tablespace full, huge sized "WRI$_OPTSTAT_HISTHEAD_HISTORY" , DBMS_STATS.PURGE_ALL demo and shrink tablespace

You may encounter a space problem in the SYSAUX tablespace.
Actually, even if you don't have a disk size problem, you may have an unnecessarily big SYSAUX tablespace without paying attention to it.
In this blog post, I will explain one of the most likely causes of a big SYSAUX tablespace, and also try to explain how to get rid of the bloat by purging the related content from the Oracle database.

The thing is, the occupied space in the SYSAUX tablespace is mostly caused by the optimizer statistics history tables.

    Here is an example output produced, when we query and sort the objects residing in the SYSAUX tablespace by their sizes;

TABLESPACE_NAME   SEGMENT_NAME                        MB

SYSAUX            WRI$_OPTSTAT_HISTHEAD_HISTORY   47,264
SYSAUX            I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST   42,150
SYSAUX            I_WRI$_OPTSTAT_HH_ST            22,497

As seen, in first place there is an object statistics history table, and the followers are its indexes.
The sizes are huge (considering this is an Oracle EBS database sized at almost 1 TB with only a few customizations).
Also, although a statistics history of this size can be created by frequent statistics collection, it is not acceptable.
In fact, the MMON process should purge these statistics according to the default retention, which is 31 days, but it seems it cannot. MMON performs these purge activities automatically, but it has a 5-minute limit for them. So if the purging takes more than 5 minutes, the activity is aborted and, as a result, the stats are not purged.

There are several bug records about this situation already:

    Document 10279045.8 Slow Statistics purging (SYSAUX grows)
    Document 8553944.8 SYSAUX tablespace grows
    Document 14373728.8 Bug 14373728 - Old statistics not purged from SYSAUX tablespace
    Document 11869207.8 Improvements to archived statistics purging / SYSAUX tablespace grows

A similar issue is addressed in the Oracle Support document "SYSAUX Grows Because Optimizer Stats History is Not Purged (Doc ID 1055547.1)".

There are 3 things we can do to get rid of this size, i.e. to purge the optimizer stats history records;

1) Applying patch 14373728, which supersedes patch 11869207 (patch 11869207 was faulty anyway). The new patch, 14373728, is an OPTIMIZER overlay patch. This patch fixes MMON, i.e. the automatic purge.

2) If we can't apply patch 14373728, we can go ahead and purge the optimizer statistics manually, with a loop like the following;

begin
  for i in reverse 10..100 loop
    dbms_stats.purge_stats(sysdate - i);
  end loop;
end;
/

Does this take too much time? Then let's look at our 3rd alternative;

3) If patch 10279045 (the related option comes with this patch) is already applied, we can go ahead and use DBMS_STATS.PURGE_ALL to purge all our historical object statistics. DBMS_STATS.PURGE_ALL just truncates the related tables, so the whole purge activity takes only 5-10 seconds to finish.

The tradeoff of this type of purging can be the inability to do forensic SQL performance analysis (and corrections accordingly), but in most customer environments those operations are done quite rarely, right? :)

The following is a demo of running DBMS_STATS with the PURGE_ALL argument;
Note that we also change the retention period from the default (31 days) to 15 days.

Initially, the tables are fully loaded. The SYSAUX tablespace is 41 GB (used almost fully) and WRI$_OPTSTAT_HISTHEAD_HISTORY is almost 14 GB, as seen below;

    SQL> set linesize 1000;
    SQL> select dbms_stats.get_stats_history_retention from dual;


    SQL> select dbms_stats.get_stats_history_availability from dual;

    23-JUL-16 PM +03:00

    SQL> select * from (select owner, segment_name, bytes/(1024*1024) from dba_segments where owner='SYS' order by bytes desc) where rownum <10;

OWNER   SEGMENT_NAME                      BYTES/(1024*1024)
------- --------------------------------- -----------------
SYS     WRI$_OPTSTAT_HISTHEAD_HISTORY                 14863
SYS     I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST                 11179
SYS     I_WRI$_OPTSTAT_HH_ST                           7074
SYS     SOURCE$                                  5481.64063
SYS     I_SOURCE1                                4280.03125
SYS     IDL_UB1$                                 2931.52344
SYS     I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST                 2224
SYS     WRI$_OPTSTAT_HISTGRM_HISTORY                   1473
SYS     IDL_UB2$                                 1087.52344

    SQL> select nvl(df.tablespace_name, nvl(fs.tablespace_name,'UNKOWN')) name,
      2         to_char(kbytes_alloc,'999,999,990') kbytes,
      3         to_char(kbytes_alloc-nvl(kbytes_free,0),'999,999,990') used,
      4         to_char(nvl(kbytes_free,0),'999,999,990') free,
      5         to_char(((kbytes_alloc-nvl(kbytes_free,0))/kbytes_alloc)*100,'990.9999') pct_used,
      6         to_char(nvl(largest,0),'999,999,990') largest
      7  from ( select sum(bytes)/1024 Kbytes_free,
      8                max(bytes)/1024 largest,
      9                tablespace_name
     10         from   dba_free_space
     11         where  tablespace_name='SYSAUX'
     12         group by tablespace_name ) df,
     13       ( select sum(bytes)/1024 Kbytes_alloc,
     14                tablespace_name
     15         from   dba_data_files
 16         where  tablespace_name='SYSAUX'
 17         group by tablespace_name ) fs;

    NAME                           KBYTES       USED            FREE         PCT_USED  LARGEST
    ------------------------------ ------------ ------------ ------------ --------- ------------
    SYSAUX                           41,943,040   41,878,016       65,024   99.8450          960
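The statements that produced the "PL/SQL procedure successfully completed." output below were not captured in the paste; per the note at the top of the demo (retention lowered to 15 days, then PURGE_ALL), they would be along these lines;

```sql
-- Lower the stats history retention to 15 days, then purge everything.
-- PURGE_ALL truncates the history tables, so it completes in seconds.
exec dbms_stats.alter_stats_history_retention(15)
exec dbms_stats.purge_stats(DBMS_STATS.PURGE_ALL)
```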



    PL/SQL procedure successfully completed.

    SQL> select dbms_stats.get_stats_history_retention from dual;


    SQL> select dbms_stats.get_stats_history_availability from dual;

    24-AUG-16 AM +03:00

    SQL> select * from (select owner, segment_name, bytes/(1024*1024) from dba_segments where owner='SYS' order by bytes desc) where rownum <10;

OWNER   SEGMENT_NAME                      BYTES/(1024*1024)
------- --------------------------------- -----------------
SYS     SOURCE$                                  5481.64063
SYS     I_SOURCE1                                4280.03125
SYS     IDL_UB1$                                 2931.52344
SYS     IDL_UB2$                                 1087.52344
SYS     ARGUMENT$                                1011.33594
SYS     _SYSSMU9_2885769297$                        849.125
SYS     _SYSSMU2_735814084$                         839.125
SYS     _SYSSMU4_179149818$                         839.125
SYS     _SYSSMU8_751394697$                         836.125

    9 rows selected.

    SQL> select nvl(df.tablespace_name, nvl(fs.tablespace_name,'UNKNOWN')) name,
      2         to_char(kbytes_alloc,'999,999,990') kbytes,
      3         to_char(kbytes_alloc-nvl(kbytes_free,0),'999,999,990') used,
      4         to_char(nvl(kbytes_free,0),'999,999,990') free,
      5         to_char(((kbytes_alloc-nvl(kbytes_free,0))/kbytes_alloc)*100,'990.9999') pct_used,
      6         to_char(nvl(largest,0),'999,999,990') largest
      7  from ( select sum(bytes)/1024 Kbytes_free,
      8                max(bytes)/1024 largest,
      9                tablespace_name
     10         from   dba_free_space
     11         where  tablespace_name='SYSAUX'
     12         group by tablespace_name ) df,
     13       ( select sum(bytes)/1024 Kbytes_alloc,
     14                tablespace_name
     15         from   dba_data_files
     16         where  tablespace_name='SYSAUX'
     17         group by tablespace_name ) fs;

    NAME                           KBYTES       USED         FREE         PCT_USED  LARGEST
    ------------------------------ ------------ ------------ ------------ --------- ------------
    SYSAUX                           41,943,040    1,944,960   39,998,080    4.6371    1,434,624

    "After the purge, we have around 39G free..."

    Now, we change the retention-->

    SQL> exec dbms_stats.alter_stats_history_retention(15);

    PL/SQL procedure successfully completed.

    SQL> select dbms_stats.get_stats_history_retention from dual;


    "After the purge, we have only 3.5 MB used by the optimizer statistics history related objects..."

    SQL> select occupant_desc, space_usage_kbytes from v$sysaux_occupants where OCCUPANT_DESC like '%Statistics%';

    OCCUPANT_DESC                                                    SPACE_USAGE_KBYTES
    ---------------------------------------------------------------- ------------------
    Server Manageability - Optimizer Statistics History                            3584
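    To see which underlying segments actually hold this history, the WRI$_OPTSTAT% tables and indexes in SYSAUX can be listed; a sketch:

    SQL> select segment_name, round(bytes/1024/1024) mb
           from dba_segments
          where tablespace_name = 'SYSAUX'
            and segment_name like 'WRI$_OPTSTAT%'
          order by bytes desc;

    On a system with a long retention, WRI$_OPTSTAT_HISTGRM_HISTORY and its indexes are usually the biggest consumers.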

    So, with this demo, we have seen how to empty the SYSAUX tablespace by purging the optimizer statistics history tables. What's next? Now we need to resize our SYSAUX tablespace to give the newly freed space back.
    In order to do this, we need to shrink or move the objects and make sure the contiguous free space ends up at the end of the datafiles of the SYSAUX tablespace.

    To accomplish this, we can use Tanel Poder's script. This script checks the current extent layout of the datafiles and generates the alter statements required for resizing them:

    with query as (
      select /*+ NO_MERGE MATERIALIZE */
             file_id, tablespace_name, max(block_id + blocks) highblock
      from   dba_extents
      group  by file_id, tablespace_name
    )
    select 'alter database datafile ' || q.file_id || ' resize ' ||
           ceil((q.highblock * t.block_size + t.block_size) / 1024) || 'K;' cmd
    from   query q, dba_tablespaces t
    where  q.tablespace_name = t.tablespace_name;

    But if the free space is not at the end of the datafiles, this script will not help. In that case, we need to reorganize the objects residing in the SYSAUX tablespace so that the free space is located at the end of the SYSAUX datafiles. In order to do this, we can follow the article published by Jonathan Lewis ->  Ref:
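    Reorganizing SYS-owned SYSAUX segments should only be done under Oracle Support guidance, but to illustrate the shrink/move idea on an ordinary segment (the schema, table and index names below are hypothetical):

    -- hypothetical example of compacting a segment so that free space
    -- can migrate toward the end of the datafile
    SQL> alter table scott.big_history enable row movement;
    SQL> alter table scott.big_history shrink space;

    -- or relocate the segment entirely (indexes go UNUSABLE and must be rebuilt)
    SQL> alter table scott.big_history move;
    SQL> alter index scott.big_history_pk rebuild;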

    That is it for this topic. I hope it will help you.

    Friday, August 19, 2016

    RAC/Exadata -- Scan listener registration problem, EBS 11i , autoconfig TNS-12514

    You may encounter a scan listener registration problem after enabling autoconfig on your EBS database tier. Although everything, including the remote_listener and local_listener parameters as well as the network-related Oracle configuration files such as tnsnames.ora, sqlnet.ora and listener.ora, may seem perfect, the dynamic registration from the EBS database to the scan listener (pointed to by the remote_listener parameter) can be problematic.
    This problem was encountered in an Exadata X6 environment, on which we hosted an EBS 11i environment with a 12c Oracle database.

    The problem is directly related to autoconfig, as autoconfig in some cases cannot handle RAC-specific configuration perfectly well.
    In this case, autoconfig creates a tns entry named scan_name:port, and actually creates it wrongly by putting the VIPs in the host entries, like below;

    (ADDRESS = (PROTOCOL = TCP)(HOST = exa1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = exa2vip)(PORT = 1521))

    The result is ORA-12514 / TNS-12514 when trying to connect using the scan name and port.
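    ORA-12514 means the listener does not know the requested service. A quick check that bypasses any broken tnsnames.ora entry is an EZConnect attempt directly against the scan (the scan name, port and service name below are illustrative):

    $ sqlplus system@//exa-rac:1521/EBSPROD

    If this also fails with ORA-12514, the database really is not registered with the scan listener, which points to the remote_listener / registration problem described here rather than a client-side tnsnames issue.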

    The solution is removing the scan_name:scan_port tns entry from tnsnames.ora, as there should not be any tns entry for the scan listener; registration with it is dynamic, so no entry is needed.
    After removing the tns entry, it is better to reset the remote_listener parameter and make PMON register the database with all available listeners using alter system register; as shown in the following example;

    sqlplus / as sysdba
    alter system set remote_listener='' scope=both sid='*';
    alter system set remote_listener='exa-rac:1521' scope=both sid='*';
    alter system register;
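    To verify that the registration actually happened, the scan listener configuration and its registered services can be checked from the OS (the listener name below is environment-specific), and the parameter can be re-checked from SQL*Plus; a sketch:

    $ srvctl config scan_listener
    $ lsnrctl status LISTENER_SCAN1     -- run on the node currently hosting the scan listener

    SQL> show parameter remote_listener

    The lsnrctl status output should now list the database's service names under "Services Summary".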

    Also, it is important to keep in mind that this solution must be reimplemented every time autoconfig is run.
    An alternative may be modifying the autoconfig templates or code to fix this issue, but that won't be supported, and frankly I don't have time for that these days :)