Wednesday, September 28, 2016

ODA X6 -- S (small) and M (medium), NVMe & Standard Edition support & Capacity on Demand licensing

In June of 2016, Oracle announced the new generation of ODA machines: the ODA X6-2S and ODA X6-2M.
The ODA machines we have dealt with so far are now called ODA HA.
This new term describes the classic 4U ODA machines as highly available ODA environments.
The HA term comes into play because, with the release of ODA X6-2, new ODA models were introduced.
ODA X6-2 S and M are the first models released in this concept.
So with these S and M machines, ODA now has more hardware options. You can think of the small model as being for small environments and the medium model for medium environments. There will probably be other configurations (such as ODA X6-2 L and ODA X6-2 HA), but for now we are focused on these two configurations, as they are the only ODA X6-2 environments released at the moment.

We will walk through the introduction of the ODA X6-2 M, as the ODA X6-2 S is just a smaller version of it (less CPU, less memory, etc.).
So, knowing ODA X6-2 M, you can also guess what the ODA X6-2 S is about.

ODA X6-2 M is a 1U machine.
It has a single server providing both the compute and storage services.
ODA X6-2 M has 6.4 TB of built-in high-speed NVMe storage.
It supports Oracle Database SE, SE One, SE2, and EE. (Earlier ODA environments, such as ODA X5, could only support Oracle Database EE -- unless they were deployed with the virtualized environment option.)
ODA X6-2 M cannot be expanded horizontally and cannot be virtualized at the moment.

The ODA X6-2 M machine specifications are as follows;

CPU: 2x10 cores (2.2 GHz Intel Xeon E5-2630 v4)
Memory:  256 GB ( 8x32GB)
Storage: 6.4 TB(2x3.2TB) NVMe
Boot Disk : 2x480GB SSD (mirrored)
Ethernet : 4x10GBase-T
Fiber : 2x10GbE SFP+

Note: ODA X6-2 S is almost the same machine. The difference between the ODA X6-2 S and M is that the ODA X6-2 S has 1x10 cores and 128 GB of RAM installed. Also, the ODA X6-2 S has 2x10GBase-T ports.


This configuration can be expanded as well. As for the expansion options, we have 512 GB (16x32GB) of memory and 6.4 TB (2x3.2TB) of NVMe storage.

The real benefit of this machine comes with its NVM Express (NVMe) flash storage. It improves database scalability and performance.
->NVMe is the new standard for PCI Express (PCIe) SSDs
->Architected for Flash Storage with minimal CPU overhead
->Works directly with PCIe interface
->No SCSI protocol overhead resulting in very fast response
->5x to 10x IOPS improvement over SAS based SSDs
->Low latency, in the hundreds of microseconds

The ODA X6-2M is optimized for the database.
As mentioned earlier, it supports Database Standard Edition, Standard Edition One and Two, as well as the Enterprise Edition.
The host OS used in this new ODA is Oracle Linux 6.7. Appliance Manager is the tool used for the management.
Both 12.1.0.2 and 11.2.0.4 RDBMS are supported for the Enterprise Edition.
As for now; 11.2.0.4 RDBMS is the only database version supported for Standard Edition and Standard Edition One.
12.1.0.2 is the only supported version for Standard Edition Two.

With this new ODA machine, the management activities are done using Appliance Manager. It has both a web console and a command-line interface. The CLI is now named ODACLI (it was OAKCLI in the past).

With simple ODACLI commands, the following can be done:
–Database creation
–Patching
–Management
–Support
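As an illustration, creating and tracking a database from the command line looks roughly like this (a sketch only; the exact option names vary by ODACLI version, so check "odacli create-database -h" on your appliance):

```shell
# Hypothetical ODACLI session -- TESTDB is an example name, and the job id
# is a placeholder printed by the create command.
odacli list-databases                # show databases already on the appliance
odacli create-database -n TESTDB    # create a new database named TESTDB
odacli describe-job -i <job_id>     # follow the asynchronous creation job
```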

Another important feature of this new ODA machine is capacity on demand, so there is capacity-on-demand licensing for the ODA X6-2M. (The idea here is "license as you grow": license only the cores you need and save significantly.)
Capacity on demand is only for Enterprise Edition databases, and the licensed core count must be a multiple of 2 (2, 4, 6, 8, 10, 12, 14, 16, 18, 20).
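A capacity-on-demand sketch, assuming the odaadmcli syntax of the initial X6-2S/M releases (verify the syntax on your own appliance before running):

```shell
odaadmcli show cpucore              # display how many cores are currently enabled
odaadmcli update cpucore -cores 4   # enable only 4 cores (the count must be even)
```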

The deployment of the machine is quite straightforward.
It has 7 steps and takes approximately 40 minutes.

The steps are as follows;

1.Rack, Cable and Connect the Network and Power
2.Start Up the System
3.Plumb the Network
4.Copy the Oracle Database Appliance (SIB)
5.Update the Oracle Database Appliance Image
6.Deploy the Oracle Database Appliance
7.Monitor Deployment Progress
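Steps 4 through 7 can be sketched from the command line as follows (file names, paths, and options are illustrative assumptions; the Appliance Manager web console covers the same steps):

```shell
# 4. copy the appliance bundle to the ODA (bundle name is hypothetical)
scp oda-sm-bundle.zip root@oda-host:/tmp/
# 5. update the appliance image from the copied bundle (see the ODA X6-2
#    documentation for the exact command on your image version)
# 6. deploy the appliance using a prepared configuration JSON file
odacli create-appliance -r /tmp/deploy-config.json
# 7. monitor the deployment progress of the returned job id
odacli describe-job -i <job_id>
```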

Another feature of the ODA X6-2M is the ability to use Oracle ASR. ASR (Automatic Service Request) comes built in with this release, so if you want to use ASR, you can just configure it using the web GUI of Appliance Manager.

Lastly, when we compare the ODA X6-2 M with a traditional general-purpose x86 system, we can say that;
ODA is better because;
  • ODA is simple, optimized, and affordable
  • It is Oracle :)
  • ODA has built-in automation and best practices
  • The hardware and software are engineered, tested, and supported to work together
  • ODA X6 has high-performance NVMe storage
  • The environment that comes built in with ODA is optimized for database workloads
  • ODA X6 has high-performance and reliable OS storage
  • Because the hardware and software are provided by Oracle, there is single-vendor support
Conclusion,

The new ODA and its S and M models provide simplicity, optimization, and affordability for customers who want to avoid complexity and tuning requirements while still getting optimal database performance for their single-instance databases.
Capacity on demand and the support for Standard Edition databases are the keys, and the most significant strengths of the new ODA X6-2S and ODA X6-2M, for becoming the choice of the relevant customers.
It is actually good to have engineered systems at this level. The release of these new small but optimized ODA machines seems promising, and I guess we will see more ODAs deployed in data centers in the coming days.

Friday, September 23, 2016

EBS 12.2 - cloned instance cannot show images / Connection refused: proxy: HTTP: attempt to connect to / GET /OA_MEDIA/oracle_white_logo.png HTTP/1.1" 503

After cloning an EBS 12.2 environment that has multiple managed servers for certain services, you may face an image rendering problem.

The problem can actually be encountered after disabling some of the managed servers in a newly cloned environment. (Generally, we don't have much load in test/clone environments, so after cloning we disable some of the managed servers that are configured to run in parallel in the production environment.)

The issue I am talking about is an image rendering and layout problem like the one shown in the example screenshot below;

The issue depicted in the screenshot above was on a newly cloned environment. The customer was not sure when it started, but the following changes had been made in this environment;

-disabled SSL
-disabled some of the oacore managed servers in this newly cloned environment
-applied ATG, TXK, and AD delta patches (using an online patching cycle)

SSL might have been the problem, as some of the profiles, like APPS_FRAMEWORK_AGENT, were still pointing to the https URL at site level.
So I disabled SSL once again, properly this time, but the issue continued.

After analyzing the Oracle HTTP Server logs, these were the findings;

HTTP Server log : [2016-09-22T19:06:33.0646+03:00] [OHS] [ERROR:32] [OHS-9999] [core.c][host_id: servername_deleted_for_security_reasons] [host_addr: 10.111.8.3] [tid: 139926295234304] [user: applmgr] [ecid: 005FHfORQun7q2w5OF0Fyd0006jJ000005] [rid: 0] [VirtualHost: main] (111)Connection refused: proxy: HTTP: attempt to connect to 10.111.8.3:7201 (servername_deleted_for_security_reasonsl) failed

HTTP Server access log: 10.20.10.21 - - [22/Sep/2016:19:06:33 +0300] "GET /OA_MEDIA/oracle_white_logo.png HTTP/1.1" 503 306

So, the HTTP server was trying to reach port 7201 (which was the current patch edition port). There must have been a leftover reference, as the oacore server was currently listening on 7202.
This could have been caused by a problematic managed server removal or by a problem in the cutover phase of the AD and TXK delta patch application.

The fix I applied was deleting the unnecessary oacore managed server references, and it worked.

The solution: perl $FND_TOP/patch/115/bin/txkSetAppsConf.pl -contextfile=$CONTEXT_FILE -configoption=removeMS -oacore=<host_name>:7201
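Before and after running the script, it is worth checking that no configuration still references the stale port. A sketch (the paths are illustrative assumptions; adjust them to your EBS 12.2 file system layout):

```shell
# Look for leftover references to the old patch-edition port (7201 here)
grep -R "7201" $FMW_HOME/user_projects/domains/EBS_domain_*/config 2>/dev/null
grep -R "7201" $IAS_ORACLE_HOME/instances/*/config/OHS 2>/dev/null
```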

EBS // After migrating EBS database to RAC, MTL_ONLINE_TRANSACTION_PUB.process_online returns false // no manager

You may encounter weird errors in workflow and concurrent programs, like Interface Trip Stop, after migrating an EBS database to a RAC environment running on traditional servers or on Exadata.
These problems are encountered when you enable load-balanced database connections for the Concurrent Managers (s_cp_twotask) -- not related to Parallel Concurrent Processing.

These types of problems are due to application-tier processing, which needs to stay on a single database node, being spread across multiple database nodes.
Thankfully, EBS gives us a profile ("Concurrent:TM Transport Type") to work around this situation, as explained in one of my previous blog posts, which was about an error in "Wf_engine_Util.Function_Call".

The same fix still applies to Interface Trip Stop.
As for the solution, we set Concurrent:TM Transport Type to "QUEUE". Normally, the Transaction Manager uses DBMS pipes (when Concurrent:TM Transport Type is set to "PIPE") to communicate with the Service Manager and with any user process initiated by Forms or by Standard Manager requests. DBMS_PIPE by design requires the sessions that need to communicate with each other to be in the same instance. As DBMS_PIPE is a package that only lets sessions in the same instance communicate using Oracle pipes, using it in a RAC environment is not acceptable. That's why we set Concurrent:TM Transport Type to "QUEUE" in RAC environments, unless we have multiple application nodes whose concurrent manager services are dedicated to their own database instances.

Anyway, let's look at the problem and the solution;
The related error in interface trip stop is as follows;

InterfaceTripStop: processing stop_id 4656255 for INV OM DSNO
no. of OE messages :
MTL_ONLINE_TRANSACTION_PUB.process_online returns false
Error Code:İşlem işlemcisi hatası
Error Explanation:Bu talebi işlemek üzere bir eşzamanlı yönetici tanımlanmadığından talep işlenemiyor.

This error message is in Turkish :), but it just means "Transaction processor error: the request cannot be processed because no concurrent manager is defined to process this request."

Action Plan:
  1. Stop Concurrent managers and workflow service components
  2. Set the Concurrent:TM Transport Type profile to "QUEUE", as described in http://ermanarslan.blogspot.fr/2015/12/ebsracexadata-concurrenttm-transport.html.
  3. Restart apps tier + db (if possible)
It is also good to recreate the Concurrent Manager views using FNDLIBR "FND" "FNDCPBWV" apps/<passwd> "SYSADMIN" "System Administrator" "SYSADMIN" (just in case).
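To verify the profile change took effect, you can query the site-level value from SQL*Plus. A sketch, assuming the profile's internal name is CONC_TM_TRANSPORT_TYPE:

```shell
sqlplus -s apps/<apps_pwd> <<'EOF'
select fnd_profile.value('CONC_TM_TRANSPORT_TYPE') as tm_transport_type from dual;
EOF
```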

    Tuesday, September 20, 2016

    EBS 12.2.6 released!

    The public announcement was done on Sep 15, 2016 and EBS 12.2.6 is released!


    You can reach the readme of it through the  Oracle Support document: Oracle E-Business Suite Release 12.2.6 Readme (Doc ID 2114016.1)

    Like 12.2.5, EBS 12.2.6 is delivered as an online patch (Patch 21900901); however, it is also installable using adop's downtime option (apply_mode=downtime).
    If you are already on EBS 12.2.2, 12.2.3, 12.2.4, or 12.2.5, you apply the patch online by executing an online patching cycle. On the other hand, if you are on an older EBS release (11i, 12.0, or 12.1) or are doing a fresh EBS 12.2 install, you are good to go with the downtime option.
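For illustration, the two application paths look roughly like this with adop (a sketch; always follow the exact sequence in the 12.2.6 readme):

```shell
# Online patching cycle (for systems already on 12.2.2 or later):
adop phase=prepare
adop phase=apply patches=21900901
adop phase=finalize
adop phase=cutover
adop phase=cleanup

# Downtime mode (fresh 12.2 installs / upgrade paths):
adop phase=apply patches=21900901 apply_mode=downtime
```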

    Note that EBS 12.2.6 requires the EBS database to be at least 11.2.0.4.
    Note also that the Oracle E-Business Suite 12.2.6 Release Update Pack requires Fusion Middleware Technology Stack (FMW) 11.1.1.7 (11gR1 PS6) or higher.

    As for the upgrades;
    EBS 11i customers should first upgrade to 12.2 before applying 12.2.6.
    EBS 12.0 and 12.1 customers should first upgrade to 12.2 before applying 12.2.6.
    EBS 12.2 customers can directly apply 12.2.6.

    Lastly, it is important to mention that EBS 12.2.6 brings new functional capabilities across Oracle E-Business Suite, a modern user experience, mobility, and operational efficiency.
    Check out the official announcement and the highlights video to get an overview of what is new in EBS 12.2.6;

    official announcement: http://www.oracle.com/us/products/applications/ebs-ga-2016-09-15-3219054.pdf
    highlights video: http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=904&get_params=cloudId:243,objectId:14783

    Monday, September 12, 2016

    EXADATA -- EBS on EXADATA, migration, 2 tips about parallelization and the listeners + 1 fix for the scan listener registration problem

    Exadata is powerful; it is said to be the world's fastest database machine, and it is stable.
    However, it is like an instrument (the Oracle database is also like an instrument), and you need to know how to play it.

    I have been doing Exadata migrations since 2012, and I was the first to implement EBS R12 on Exadata in Turkey. Since then, I have migrated several EBS environments to Exadata X2, X3, X4, and X5. These migrations have become my routine. Although I have a team of 5 people, I have always wanted to do the migrations myself, as the routine keeps evolving, and in every new migration there is something to learn and some optimization to be done.
    So here I am, wanting to give you 2 tips regarding EBS on Exadata implementations.
    I recently migrated an EBS 11i environment to an Exadata X6. The EBS in question was a critical production system with lots of consumers.
    The POC and TEST migrations went perfectly; there were no issues at all.
    Everything was running as expected, and everything was configured as requested by the customer.
    However, during the PROD migration, the customer wanted 2 new things:
    1) They wanted a more optimized parallelization configuration in the EBS PROD database residing on Exadata.
    2) They wanted Exadata to be a consolidated environment, where they would host lots of EBS databases, including TEST, DEV, UAT, and more.

    PARALLELIZATION:

    So, regarding parallelization, I implemented Auto DOP. The Exadata was an X6 1/8, and the RDBMS software on it was Oracle Database 12c. So it was time to trust Oracle a little bit: I implemented the automatic degree policy and let Oracle decide the parallel degrees. Still, I needed to limit the parallelism, as I didn't want the server resources to be consumed all at once.

    So, I configured the parallel parameters for this job as follows;
    --Remember, it is an Exadata X6 1/8, and these settings may need to be changed according to your environment and needs.

    SQL> show parameter parallel

    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      AUTO
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     TRUE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     480
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     32
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     TRUE
    parallel_server_instances            integer     2
    parallel_servers_target              integer     320
    parallel_threads_per_cpu             integer     1


    So, in brief, what I have configured and told Oracle is:
    you can have a maximum of 480 parallel servers; you should not allocate lots of parallel servers for a single query; you should decide the parallel degrees automatically (Auto DOP); and you should run the parallel statements of a query on the same node (don't scatter the parallel servers of a query across nodes -- this is an EBS requirement).
    Also, I told it to activate the parallel statement queuing feature when the allocated parallel server count reaches 320.
    So what I actually said is: if 320 of your parallel servers are allocated, and a new query requires 200 parallel servers, then queue it until those 200 parallel servers are available (320+200=520 exceeds parallel_max_servers).
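The queuing decision above boils down to a simple comparison; here is an illustrative sketch of the arithmetic (not an Oracle API, just the logic):

```shell
parallel_max_servers=480
parallel_servers_target=320
allocated=320   # parallel servers currently busy
needed=200      # DOP requested by the incoming statement

# With parallel_degree_policy=AUTO, a statement is queued once the busy
# server count reaches parallel_servers_target; note that 320+200=520
# would also exceed parallel_max_servers.
if [ "$allocated" -ge "$parallel_servers_target" ]; then
  decision="queue"
else
  decision="run"
fi
echo "$decision"
```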

    Let's explain how auto dop works a little bit, as it is the most important thing in this parallelization configuration;

    When Oracle estimates the time for a query to be less than PARALLEL_MIN_TIME_THRESHOLD, it runs the query serially.
    If the estimated time is higher than PARALLEL_MIN_TIME_THRESHOLD, then Oracle (12c) looks at the CPU and I/O costs and decides the parallel degree accordingly.

    At this point, to prevent a single query from allocating lots of parallel processes, we set PARALLEL_DEGREE_LIMIT.

    PARALLEL_DEGREE_LIMIT is set to CPU by default. This setting means: "the maximum DOP is limited by the DEFAULT DOP".

    DEFAULT DOP is calculated as follows;
    PARALLEL_THREADS_PER_CPU * SUM(CPU_COUNT across all cluster nodes) . For ex: 1*44=44

    To get the optimized value, Oracle uses ACTUAL DOP = MIN(IDEAL DOP, PARALLEL_DEGREE_LIMIT).
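In shell arithmetic, the calculation above looks like this (illustrative numbers, using the 44 total CPUs from the example; the IDEAL DOP value is an assumption):

```shell
parallel_threads_per_cpu=1
cpu_count_total=44   # SUM(CPU_COUNT) across all cluster nodes

default_dop=$((parallel_threads_per_cpu * cpu_count_total))   # 1*44 = 44

ideal_dop=64                 # assume the optimizer computed this IDEAL DOP
degree_limit=$default_dop    # PARALLEL_DEGREE_LIMIT=CPU means "the DEFAULT DOP"
actual_dop=$(( ideal_dop < degree_limit ? ideal_dop : degree_limit ))
echo "DEFAULT DOP=$default_dop ACTUAL DOP=$actual_dop"
```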

    So when Oracle decides the IDEAL DOP for AUTO DOP, it uses the formula and method above, but it also uses the values stored in resource_io_calibrate$ .

    Higher value for the MAX_PMBPS (maximum megabytes per second) in this table -> lower value for IDEAL DOP.

    For example, the following delete-insert makes Oracle decide on a lower IDEAL DOP.
    Note: we set MAX_PMBPS to 200 MB/s. Because of this, the IDEAL DOP will be lower, and as ACTUAL DOP = MIN(IDEAL DOP, PARALLEL_DEGREE_LIMIT), Oracle will decide on low parallel degrees.

    delete from resource_io_calibrate$;
    insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
    commit;

    However, if we set MAX_PMBPS to a lower value, like the 20 MB/s shown in the example below, Oracle will decide on a higher IDEAL DOP.

    delete from resource_io_calibrate$;
    insert into resource_io_calibrate$
    values(current_timestamp, current_timestamp, 0, 0, 20, 0, 0);
    commit;

    Well, the parameters I gave above can be changed according to your environment; I only wrote this blog post for awareness. But you get the idea, right?
    The idea is: use Auto DOP, set the parallel parameters in an optimized way, and limit unnecessary parallel server allocation.
    What about MAX_PMBPS in resource_io_calibrate$? I deleted the row, as recommended by Oracle, and things continued perfectly fine. But I knew that I could take control through resource_io_calibrate$ if the need arose.

    LISTENERS:

    Regarding the consolidation, the listener configuration comes into play. Normally, Oracle recommends running the EBS listeners from the GRID home. However, what if I have several EBS databases on Exadata -- several EBS databases in the same cluster, I mean? Will I put lots of IFILE pointers from the tnsnames, sqlnet, and listener files stored in the GRID home to the different TNS_ADMIN directories of the different EBS database homes? That seems dirty, right?

    Here , the following approach comes to our help;

    What we do is;
    We create the EBS local listeners as the oracle OS user in the EBS Oracle homes.
    We execute srvctl as the oracle OS user to do that. (This is important.)
    Then we set the TNS_ADMIN environment variables for these listeners and their associated databases, again using srvctl. We set TNS_ADMIN to the $ORACLE_HOME/network/admin directory associated with the related listener.
    We add IFILE pointers from those default network admin files to the EBS TNS_ADMIN files, and we are ready to go.
    This approach isolates the listeners and the tnsnames and sqlnet files.
    It is hard to explain, but the following output will give you a better picture of what I'm talking about;

    Here is what it looks like when I configured them;
    Note: Please concentrate on the listener named LISTENER_PROD;

    [root@exadb01 ~]# ps -ef|grep inh
    oracle   102244      1  0 Sep11 ?        00:00:00 /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/tnslsnr LISTENER_TEST70 -no_crs_notify -inherit
    oracle   160117      1  0 Sep11 ?        00:02:20 /u01/app/oracle/product/12.1.0.2/dbhome_prod/bin/tnslsnr LISTENER_PROD -no_crs_notify -inherit
    grid     160740      1  0 Sep11 ?        00:01:35 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
    grid     160977      1  0 Sep11 ?        00:01:19 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
    root     282658 280422  0 13:03 pts/1    00:00:00 grep inh
    grid     339575      1  0 Sep01 ?        00:00:14 /u01/app/12.1.0.2/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit

    [grid@exadb01 ~]$ srvctl config listener -l LISTENER_PROD

    Name: LISTENER_PROD
    Type: Database Listener
    Network: 1, Owner: oracle
    Home: /u01/app/oracle/product/12.1.0.2/dbhome_prod
    End points: TCP:1523
    Listener is enabled.
    Listener is individually enabled on nodes:
    Listener is individually disabled on nodes:

    tnsnames.ora: (sqlnet.ora and listener.ora also have these kinds of IFILEs)
    IFILE=/u01/app/oracle/product/12.1.0.2/dbhome_prod/network/admin/PROD1_exadb01/tnsnames.ora

    [grid@exadb01 ~]$ srvctl getenv database -d PROD
    PROD:
    ORA_NLS10=/u01/app/oracle/product/12.1.0.2/dbhome_prod/nls/data/9idata
    TNS_ADMIN=/u01/app/oracle/product/12.1.0.2/dbhome_prod/network/admin

    [grid@exadb01 ~]$ srvctl getenv listener -l LISTENER_PROD
    LISTENER_PROD:
    TNS_ADMIN=/u01/app/oracle/product/12.1.0.2/dbhome_prod/network/admin

    So, with this configuration, every EBS listener runs from its own Oracle home, and every EBS listener sees only its own tnsnames, sqlnet, and listener files.
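For reference, registering such a listener can be sketched with srvctl commands like the following (run as the oracle OS user; the home, name, and port are taken from the example above -- verify the option syntax for your Grid Infrastructure version):

```shell
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_prod

# create the listener resource owned by the EBS database home
srvctl add listener -l LISTENER_PROD -p TCP:1523 -o $ORACLE_HOME

# point both the listener and its database to the home's own TNS_ADMIN
srvctl setenv listener -l LISTENER_PROD -t TNS_ADMIN=$ORACLE_HOME/network/admin
srvctl setenv database -d PROD -t TNS_ADMIN=$ORACLE_HOME/network/admin

srvctl start listener -l LISTENER_PROD
```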

    One last thing (it is important):
    If you see that your EBS database is not registering itself with the SCAN listener, check the tnsnames.ora stored in the EBS TNS_ADMIN directory. If your EBS is 11i, you will probably see a TNS name entry in the form of the SCAN name and port (scan_name:port)... You know what... That entry is not correct. It should not be there. It is created by autoconfig, but it is a bug. So delete that record and use "alter system register" to make your database register itself with the SCAN listener. You will see that it works.
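A sketch of the check and the re-registration from SQL*Plus (run as sysdba on the EBS database):

```shell
sqlplus -s "/ as sysdba" <<'EOF'
REM remote_listener should resolve to the SCAN name and port
show parameter remote_listener
REM force the instance to re-register with the listeners
alter system register;
EOF
```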

    Well, we have reached the end of this blog post. It was a complicated one, I know, but I still find it quite useful. I hope you have enjoyed reading.

    Saturday, September 10, 2016

    RDBMS - Active-Active Data Center from Oracle perspective

    Yesterday a question was asked of me; actually, it was a request to provide an Oracle solution for an Active-Active data center deployment.
    The answer I gave was via an Oracle presentation, which is available through the following link: http://www.oracle.com/us/products/database/300460-132393.pdf ("Deploying Active-Active Data Centers Using Oracle Database Solutions")

    So, basically, there are 4 options we can offer for an active-active database deployment.
    It actually depends on what we understand by the term Active-Active.
    If we want read+write on the DR site, there are 3 options;

    1) RAC Extended Clusters: applicable when the distance between the sites is not more than 25 km. This is a RAC configuration in which the nodes can be in different sites.
    2) Oracle Streams: this is like replication, but bidirectional. No distance limit.
    3) GoldenGate: primary and DR are both read-write. No distance limit.

    But if we want a synchronized copy of the database and want to use it only for data extracts, reporting, and heavy SQL queries -- in other words, if we want read-only -- then the option is Active Data Guard.

    4) Active Data Guard: once the network requirements are met, the latency between the sites can be reduced to a minimum using a sync transport + real-time apply configuration.
    However, there won't be any writes taking place on the DR site, even with Active Data Guard implemented.
    It is actually for offloading reports, SQL queries, and data extract jobs to the DR site. In case of a disaster, the switchover is almost transparent to the applications. Note: Active Data Guard requires a license.

    What about the applications? Well, it depends on the application. Normally, rsync or any storage-level technology is enough for keeping a synchronized copy of the applications. Any configuration that can be done in the application layer can be implemented as well, to support a transparent disaster recovery solution. However, if an active-active data center deployment needs to be done, the application layer should be analyzed and certified for it. This is not an issue when you use RAC Extended Clusters, but when you use GoldenGate or Streams for an active-active data center deployment, the applications should be analyzed and tested accordingly (there may even be a need for some custom solutions).
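For the simple application-copy case, an rsync sketch might look like this (the paths and the DR host name are hypothetical):

```shell
# keep the DR copy of the apps tier file system in sync with production
rsync -az --delete /u01/app/EBSapps/ oracle@dr-host:/u01/app/EBSapps/
```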

    Thursday, September 8, 2016

    DR -- RDBMS // recommended/adequate DR Configuration Diagram --an overview

    Here is a diagram to summarize what kind of configuration is recommended for having a Production Oracle Database environment with an adequate DR configuration.
    The DR solution, which is a 2-tiered one (a local DR and a remote DR) shown in the figure below, depends completely on the use of Oracle Data Guard.
    I recommend Maximum Availability mode for the local DR/standby and Maximum Performance mode for the remote DR (as already shown in the text box in the figure below).
    The figure is self-explanatory, but if you have any questions, I will happily answer them.



    You can find more info about the Oracle DR and Standby configuration in this blog...

    Some of my blog post regarding the same subject:


    EBS -- RAC/ASM aware auclondb.sql

    auclondb.sql is something we use in EBS migrations. I have used it in many EBS on Exadata implementations, during the migration phase.
    It basically generates a script (aucrdb.sql) that creates a database with tablespaces and file structures similar to those of the database against which it is run.
    However, it is not ASM aware. That is, auclondb.sql generates the database creation script perfectly, but it names the datafiles without considering ASM.
    As you recall, when we create a datafile in ASM environments (like RAC/ASM and Exadata), we just name it after the diskgroup (for example: +DATA) and let Oracle store it in the related ASM directories with the related ASM filenames.
    auclondb.sql, however, generates the tablespace creation script using a format like Diskgroup/File_name.dbf. So, although this is not a big problem, it is not good practice in my opinion.
    This is because, when we use the script (aucrdb.sql) generated by auclondb.sql to create our target database on ASM, aucrdb.sql instructs Oracle to create each file with an explicit filename. ASM does not like this; it is not designed for it. So what ASM does, to make aucrdb.sql happy, is create ASM aliases for the datafile names provided by aucrdb.sql, while still using its own file naming in the backend.
    This ends up with lots of ASM aliases on the ASM filesystem, and it just looks dirty (consider that lots of DBs may be created this way). Also, getting rid of these aliases is another problem, as they are recorded in the controlfiles, so the controlfiles must be recreated after deleting the aliases.

    So, what I recommend is modifying auclondb.sql (which is not supported, but I don't see any harm in it, as long as it is done appropriately) to name the datafiles in an ASM-friendly way.
    I won't actually share the modified auclondb.sql, but I did this and it works.
    With just a little modification, auclondb.sql can be changed to generate the database creation script aligned with ASM. That is, it can be modified to create the datafiles using only the diskgroup name (for example: +DATAC1), thus leaving the file naming and directory pathing to ASM.
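In spirit, the generated DDL should change as sketched below (+DATAC1 is an example diskgroup; with only the diskgroup name given, ASM/OMF picks the directory and file names):

```shell
sqlplus -s "/ as sysdba" <<'EOF'
REM before: an explicit file name, which forces ASM to create an alias
REM   ... datafile '+DATAC1/PROD/system01.dbf' size 1G ...
REM after: only the diskgroup name, leaving the naming to ASM
create tablespace example_ts datafile '+DATAC1' size 1G;
EOF
```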

    This is just a hint. Keep that in mind :)

    EBS R12/11i - After Disabling SSL on Apps Tier, "javax.net.ssl.SSLException: SSL handshake failed: SSLProtocolErr" in Workflow Mailer logs

    A new customer reported a quite new problem :)
    The situation was a little strange: the Workflow Notification Mailer was trying to do some https/SSL work, although SSL was never implemented for it.

    Issue: 
    The issue started after disabling SSL/HTTPS on the EBS apps tier.
    After this change, the EBS URLs changed back to http, but the mailer strangely kept trying to do some SSL work and encountered errors, as we saw the following line in the workflow mailer's log;

    Log:
    getFormattedMessages()]:Problem getting the HTML content -> oracle.apps.fnd.wf.mailer.NotificationFormatter$FormatterSAXException: Problem obtaining the HTML content -> oracle.apps.fnd.wf.common.HTTPClientException: Unable to invoke method HTTPClient.HTTPConnection.Get caused by: javax.net.ssl.SSLException: SSL handshake failed: SSLProtocolErr

    Problem: 
    So, it was obvious that the Workflow Mailer was trying to use some EBS framework agents, and although the URLs of these agents had been updated to http, the mailer was still trying to reach them through https.
    Why the mailer tries to reach the EBS framework agents is another subject, and I have already explained it in my earlier posts (just search my blog for "workflow mailer" and you'll find lots of info about it).

    Diagnostics and things tried:
    Anyway, restarting the apps tier, the database, or the mailer itself did not solve the issue.
    All the context files on the filesystems, the context files stored in the FND_OAM_CONTEXT_FILES table, and all the related profiles in URL form were appropriately set to http.
    After some diagnostics, I found that the new workflow emails were actually delivered successfully, but some of the old ones were not.
    So, the problem was in the mailer queues -- I mean queues like wf_deferred and workflow_notification_out. It seemed that the EBS framework agent URL, which was https when these problematic messages were queued, was still stored in the queues.
    So the mailer was reading this info from the queues, and although the agent's URL had changed, it was still trying to reach the agents using the old saved https URL.

    Anyway, this is actually expected behavior, but it is not very accurately documented.
    The fix was rebuilding the queues.
    For rebuilding the queues, the Oracle Support document named "How To Rebuild Mailers Queue when it is Inconsistent or Corrupted? (Doc ID 736898.1)" is good to follow.
    The problem is also described there:

    Look -> when you change the profile option "WF: Workflow Mailer Framework Web Agent", messages in the mailer's queue may still refer to the old value. "In such situations, mailer's queue must be rebuilt."

    The action plan:
    Take a backup (just in case).
    Stop the WF mailer.
    Follow note 736898.1 and rebuild the queues (you will use the wfntfqup script -> sqlplus apps/<apps_pwd> @$FND_TOP/patch/115/sql/wfntfqup APPS <apps_pwd> APPLSYS).
    Start the WF mailer.