Friday, September 29, 2017

FMW -- Starting/Stopping a 2 Node Forms&Reports 12C Cluster with a single command. SCRIPTS.. Automated start/stop for Highly Available FMW environments

Hello everyone,

today, I want to share 2 scripts that I have written for starting and stopping a 2 Node Forms&Reports 12C Cluster in one go. These scripts make our lives easier, as they provide an automated way of controlling FMW 12C cluster components.

Using these scripts, an admin can connect to a single node (the primary node) and start/stop all the Forms&Reports services across both nodes (including the OHS instances, the managed servers, the Admin Server etc.) by running one simple command.

In addition to that, I wrote these scripts by taking the dependencies between the components into account. That is, if a component is dependent on another component, the dependent component is started after the component that it depends on.

Likewise, if a component is dependent on another component, that dependent component is stopped before the component that it depends on.

Before running these scripts, we configure ssh equivalency between node1 and node2. The environment that I wrote these scripts for was Solaris, and it was very easy to enable ssh equivalency between the FMW OS users. The same method for enabling ssh equivalency works on Linux as well.

So, we enable ssh equivalency because the scripts connect from node1 to node2 using ssh.

Actually, one-way ssh equivalency is enough (node1 should be able to connect to node2 using ssh, without a password).
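
If you need a quick reminder of how to do this, a minimal sketch looks like the following (assuming the FMW OS user is named oracle on both nodes; the user and host names are illustrative):

# Run as the FMW OS user on node1; the user and host names below are illustrative.
ssh-keygen -t rsa                  # accept the defaults and leave the passphrase empty
ssh-copy-id oracle@node2hostname   # copies the public key to node2; if ssh-copy-id is not available, append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on node2 manually
ssh oracle@node2hostname hostname  # should now print node2's hostname without asking for a password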

In addition to that, there is one other requirement. That is, we create a directory called /<your_mount_point>/startstop_scripts and put our script files there. (We create this directory on both node1 and node2.)

In order to start the services , we use the script called FRMRP_START.sh

For stopping the services, we use the script called FRMRP_STOP.sh

So, we only execute the "sh FRMRP_START.sh" command to start all the services (on both node1 and node2).

Similarly, we only execute the "sh FRMRP_STOP.sh" command to stop all the services (on both node1 and node2).

Pretty handy, right? :) We just execute a script, we wait a little, and our full WLS stack, including the highly available Forms and Reports services, is started/stopped. No need to remember the nohup commands, no need to create multiple ssh connections, no need to connect to the WebLogic console for starting the managed servers, no need to remember the command for starting the OHS instances, and so on... no need to spend energy while starting/stopping multiple FMW components across multiple server nodes :)

In order to start all the services (across a 2 node - FMW Forms&Reports 12C cluster)
We connect to node 1 using FMW OS user
We cd to the directory where our scripts are located -> /u01/startstop_scripts
We execute FRMRP_START.sh

In order to stop all the services (across a 2 node - FMW Forms&Reports 12C cluster)
We connect to node 1 using FMW OS user
We cd to the directory where our scripts are located -> /u01/startstop_scripts
We execute FRMRP_STOP.sh

The code of the scripts is as follows:

--Note that the directory paths used in these scripts should be modified according to your environment.

Alternatively, the scripts can be enhanced to make use of environment variables or shell script variables rather than hardcoded directory paths. I was in the field and wrote these scripts there.. I could actually have written them better to remove the hardcoded path dependencies and the ssh equivalency requirement, but still, these scripts are okay and they are already tested & used in a production environment.

Also note that there are 3 Python scripts that you will see below. These Python scripts are internally executed by the FRMRP_START.sh and FRMRP_STOP.sh scripts, so they should also be located in the script directory (/<your_mount_point>/startstop_scripts).

FRMRP_START.sh script:

#Set the domain env.

. /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/setDomainEnv.sh

# Starting NodeManager 1 on node1
echo Starting Node Manager 1
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startNodeManager.sh > /tmp/nohup_nodemanager.out 2>&1 &

# Starting NodeManager 2 on node2
echo Starting Node Manager 2
ssh <node2hostname> '/u01/FMWHOME/oracle_home/oracle_common/common/bin/wlst.sh /u01/startstop_scripts/startnodemgr2.py'



# Starting WebLogic Admin Server
echo Starting Admin Server
echo We just wait here for 60 secs
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startWebLogic.sh > /tmp/nohup_adminserver.out 2>&1 &
sleep 60

# Starting the managed servers on Node 1
echo Starting the managed servers on Node 1
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startManagedWebLogic.sh WLS_FORMS > /tmp/nohup_wlsforms.out 2>&1 &
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startManagedWebLogic.sh WLS_REPORTS > /tmp/nohup_wlsreports.out 2>&1 &

#Starting the managed servers on Node 2
echo Starting the managed servers on Node 2
ssh <node2hostname> 'nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startManagedWebLogic.sh WLS_FORMS1 > /tmp/nohup_wlsforms1.out 2>&1 &'
ssh <node2hostname> 'nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startManagedWebLogic.sh WLS_REPORTS1 > /tmp/nohup_wlsreports1.out 2>&1 &'

# Starting Web Tier OHS1
echo Starting Web Tier OHS1
/u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startComponent.sh ohs1

# Starting Web Tier OHS2
echo Starting Web Tier OHS2
/u01/FMWHOME/oracle_home/oracle_common/common/bin/wlst.sh /u01/startstop_scripts/startohs2.py

echo Script completed.
echo The logs are under /tmp.. nohup_* files.

Note that the FRMRP_START.sh script needs 2 additional/helper scripts in order to run successfully. See below ->

Helper Scripts for FRMRP_START.sh:

These scripts were written in Python and they are meant to be executed by WLST. They are for starting the Node Manager and the OHS instance remotely (for starting node2's Node Manager and OHS from node1).


startohs2.py script (located on node1)

nmConnect('nodemanager','xxxxx','node2.oracle.com','5556','base_domain','/u01/FMWHOME/oracle_home/user_projects/domains/base_domain','ssl');
nmStart(serverName='ohs2', serverType='OHS');
exit();


startnodemgr2.py script (Located on node2) 

startNodeManager(verbose='true',NodeManagerHome='/u01/FMWHOME/oracle_home/user_projects/domains/base_domain/nodemanager',ListenPort='5556',ListenAddress='xxxxx.node2.oracle.com')
exit()

FRMRP_STOP.sh script:

# Set the domain environment
. /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/setDomainEnv.sh

# Stopping Managed Servers on node1
echo Stopping Managed Servers on node1
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopManagedWebLogic.sh WLS_FORMS > /tmp/nohup_wlsforms.out 2>&1 &
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopManagedWebLogic.sh WLS_REPORTS > /tmp/nohup_wlsreports.out 2>&1 &

# Stopping Managed Servers on node2
echo Stopping Managed Servers on node2
ssh node2hostname 'nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopManagedWebLogic.sh WLS_FORMS1 > /tmp/nohup_wlsforms1.out 2>&1 &'
ssh node2hostname 'nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopManagedWebLogic.sh WLS_REPORTS1 > /tmp/nohup_wlsreports1.out 2>&1 &'

# Stopping Web Tier OHS1
echo Stopping Web Tier OHS1
/u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopComponent.sh ohs1

# Stopping Web Tier OHS2
echo Stopping Web Tier OHS2 using WLST in foreground.
/u01/FMWHOME/oracle_home/oracle_common/common/bin/wlst.sh /u01/startstop_scripts/stopohs2.py

# Stopping Node Manager 1 on node1

echo Stopping Node Manager 1 on node1
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopNodeManager.sh > /tmp/nohup_nodemanager.log 2>&1
# Stopping Node Manager 2 on node2

echo Stopping Node Manager 2 on node2
ssh node2hostname '/u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopNodeManager.sh'

# Stopping Weblogic Admin Server
echo Stopping Weblogic Admin Server
nohup /u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/stopWebLogic.sh > /tmp/nohup_adminserver.out 2>&1

echo Script completed.
echo Check /tmp for the script logs.. nohup_* files.

Helper Script for FRMRP_STOP.sh:

This script was written in Python and it is meant to be executed by WLST. It is for stopping the OHS instance remotely (for stopping node2's OHS instance from node1).

stopohs2.py script

nmConnect('nodemanager','xxxx','forms02.oracle.com','5556','base_domain','/u01/FMWHOME/oracle_home/user_projects/domains/base_domain','ssl');
nmKill(serverName='ohs2', serverType='OHS');
exit();

Thursday, September 28, 2017

EBS 11i -- Could not initialize class oracle.apps.fnd.common.Guest / a different kind of a problem and an easy fix.

Nowadays, I'm dealing with an EBS 11i-EXADATA migration. I have solved some pretty interesting issues during the way and wanted to share one of them with you.
It was encountered while we were migrating the TEST environment to Exadata.

The issue started at the point where the DBA applied 11i.ATG_PF.H.RUP7 (as a prerequisite for the migration).

Not only the Jserv logs (in EBS 11i, we have Jserv) and the OACore logs, but even the login page itself was complaining about the Guest class.

The error that we saw was "Could not initialize class oracle.apps.fnd.common.Guest", and no matter what we did, we could not fix it. (The error was documented in MOS, but the solution documented there didn't fix the problem.)

So this issue was a little different, and that's what made me jump into the code and analyze the Guest class.

The error text made me think that there could be a classpath problem or a class-permission problem, but the actual problem was surprisingly weird :)

I saw that the Guest class was written to execute fnd_web_sec.get_guest_username_pwd (the call was enclosed in a begin-end block).

So I checked the database and saw that the fnd_web_sec package had no function named get_guest_username_pwd.
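
By the way, this kind of check can be done with a simple data dictionary query (a minimal sketch; the apps password is a placeholder, and no rows returned means the function is missing from the package, as it was in our case):

sqlplus -s apps/<apps_password> <<EOF
select procedure_name from all_procedures
where object_name = 'FND_WEB_SEC' and procedure_name = 'GET_GUEST_USERNAME_PWD';
EOF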

The get_guest_username_pwd function seemed to be delivered with 11i.ATG_PF.H.RUP7 (or one of the other patches along the way), and I concluded that there was a synchronization problem between the apps code and the db code..

The apps code was expecting get_guest_username_pwd, but the db code had no function named get_guest_username_pwd.

At this point, I concluded that this was a db-level problem, and I also concluded that the "could not initialize class" and "java.lang.NoClassDefFoundError" errors were misleading (they were the results, not the cause).

When I analyzed and investigated the issue by asking the DBA, I found out that, after the patch application, they had recreated the fnd_web_sec package with its former code.
They said "we did it, because we had another custom plsql which was dependent on fnd_web_sec and that custom plsql could not work with the new version of the fnd_web_sec."

At this point, I recreated fnd_web_sec by taking its code from another RUP7 environment (the missing function was there) and told them not to modify standard code.

I told them to modify their custom code so that it is aligned with the changes in the standard code.

At the end of the day, we dealt with a basic problem, but its cause could not be found easily. (A hard-to-solve basic problem, isn't it? :)

The lessons learned for the customer and that DBA were:
  • Never touch the standard code.
  • Analyze patches before applying them, and test your customizations if you suspect they may be affected.
  • Document your customizations and check them after applying any patches.
  • Modify your custom code when the standard code that it depends on changes.

Wednesday, September 27, 2017

EBS R12 -- XML publisher -- java.lang.OutOfMemoryError, the definitions of recommended properties

For big reports, Oracle recommends setting the following properties for XML Publisher.
This applies especially when you encounter java.lang.OutOfMemoryError (usually the OPP gets it).

Set the following properties from XML Publisher Administration:

Responsibility=>Administration UI 

General => Temporary directory => /tmp 
This could be any directory with full read and write access 

FO Processing=> 
Use XML Publisher's XSLT processor =>true 
Enable scalable feature of XSLT processor=> true 
Enable XSLT runtime optimization=>true
The above properties can be set in "xdo.cfg" as well:

<property name="xslt-xdoparser">True</property>
<property name="xslt-scalable">True</property>

<property name="xslt-runtime-optimization">True</property>

Some of my followers asked about their definitions, and here they are:
  • Enable XSLT runtime optimization: When set to "true", the overall performance of the FO processor is increased and the size of the temporary FO files generated in the temp directory is significantly decreased. 
  • Use XML Publisher's XSLT processor: Controls whether XML Publisher's own XSLT processor is used. It must be set to "true" for the scalable feature property below to be effective.
  • Enable scalable feature of XSLT processor: Controls the scalable feature of the XDO parser. The property "Use BI Publisher's XSLT processor" must be set to "true" for this property to be effective.

Tuesday, September 26, 2017

EBS 11i - compiling jsps, just a little info -> not a valid class_dir directory

We know that we can compile JSPs in EBS 11i manually (by using perl -x $JTF_TOP/admin/scripts/ojspCompile.pl --compile --quiet).

We also know that, in EBS 11i, we can clear the JSP cache by deleting the _pages directory located in $COMMON_TOP.

However, there is a small but important thing that we need to know while planning to take these 2 actions.

That is, you can't just clear the JSP cache and then directly compile the JSPs.

This is because ojspCompile.pl wants the $COMMON_TOP/_pages/_oa__html directory to be present, as it is designed to use this directory as its class_dir.

So, if we clear the JSP cache (by running rm -fR $COMMON_TOP/_pages) and then run ojspCompile.pl immediately, we end up with the following:

identifying apache_top.../TEST/testora/iAS
identifying apache_config_top.../TEST/testora/iAS
identifying java_home.../usr/java/jdk1.6.0_23
identifying jsp_dir.../TEST/testcomn/html
identifying pages_dir.../TEST/testcomn
identifying classpath...file:///TEST/testora/iAS/Apache/Jserv/etc/jserv.properties
"not a valid class_dir directory: (/TEST/testcomn/_pages/_oa__html)"


Well.. As seen above, we need to have the JSP cache directory in place to run ojspCompile.pl.

In order to have our JSP cache back, we start Apache and then, using our browser, we reach the login page (reaching it once is enough).

After that, we see that our $COMMON_TOP/_pages/_oa__html directory is created. At this point, we can run ojspCompile.pl without any errors.
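
So the whole sequence looks roughly like this (a sketch only; the script locations are the standard 11i ones and may differ in your environment):

# Clear the jsp cache.
rm -fR $COMMON_TOP/_pages
# Start apache (adapcctl.sh lives under $COMMON_TOP/admin/scripts/<context_name> in a standard 11i install).
adapcctl.sh start
# Reach the login page once in a browser, so that $COMMON_TOP/_pages/_oa__html gets recreated.
# Then compile the jsps:
perl -x $JTF_TOP/admin/scripts/ojspCompile.pl --compile --quiet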

This was the tip of the day. I hope you will find it useful.

Wednesday, September 20, 2017

Problem installing Oracle FMW 12 - Error - CFGFWK-64254, ONS related error, oracle.jdbc.fanEnabled=false

Today, I was building a 2 node Forms & Reports 12.2.1.3 Cluster on Solaris 11.3 SPARC 64-bit, and during the config.sh run I encountered the CFGFWK-64254 error during the "OPSS Processing" phase execution.
The underlying error was "java.lang.IllegalArgumentException: ONS configuration failed"..
It was clearly related to RDBMS ONS (Oracle Notification Service), but the database environment where I created the RCU schemas (the Forms and Reports schemas) was a single node db environment and it was not configured with ONS.
So the error was unexpected and probably a bug. It was not documented, and that motivated me to find the fix.
The installer of Forms 12.2.1.3 (or let's say FMW), however, wanted to use ONS and insisted on it..
In the earlier config.sh screens, I actually did find a workaround for it.. That is, I could use the FAN-related argument in those screens, as those screens had textboxes for supplying java arguments (oracle.jdbc.fanEnabled=false).

However, when you fill in all the config.sh installation forms and press the "create" button, you cannot use this workaround, as there is nowhere to supply this java argument, and you end up with these ONS-related errors.

The workaround (in my opinion, it is a fix / a patch) for this is to supply this argument in config_internal.sh. (config.sh indirectly executes config_internal.sh.)

What I did was to modify config_internal.sh to include -Doracle.jdbc.fanEnabled=false.
Of course, I wrote it in the right place/line in that script and made java use it.
This fixed the problem.
Tested and verified. :)
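
For reference, the change is along these lines (a sketch only; the exact variable that config_internal.sh uses to build the java command line differs between releases, so the variable name below is purely illustrative; locate the java invocation in your own copy of the script):

# Illustrative only: append the property to the arguments that config_internal.sh passes to java.
# JVM_ARGS is a hypothetical variable name; your copy of the script may use a different one.
JVM_ARGS="${JVM_ARGS} -Doracle.jdbc.fanEnabled=false"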

Monday, September 18, 2017

EBS 12.2 -- NFS-based Shared Application Filesystem -- What to do when the first node is down?

I already wrote a blog post about an important point to be considered when building a Shared Application Filesystem using NFS. (http://ermanarslan.blogspot.com.tr/2017/08/ebs-122-important-point-to-be.html)
This point should be considered especially when we export the NFS shares from the first apps node and mount them from the second node (as instructed in Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2 (Doc ID 1375769.1)).

That is, in such a multi-node shared application filesystem configuration, when our 1st node, where the NFS shares are hosted, is down, our EBS apps tier services go down.
This is expected behaviour. It is caused by the first node being a single point of failure. So, if it goes down, the NFS shares go with it.

However, we should be able to start our EBS apps tier services on the surviving nodes, right? 
This is an important thing, because the problem in the first node may not be resolved quickly.. 

Well. Here are the things that we should do to start the EBS apps tier services on the second apps node in such a scenario ->

Note : these steps are for NFS-based shared application filesystem.

1) Map the apps LUNs to the second (surviving) node: This is a storage and OS tier operation. The LUNs on which the apps filesystem resides should be mapped to and mounted on the second node.

2) Update the second node's apps tier context file and run autoconfig on the secondary apps node (see the sketch after this list).
Three context value updates are necessary: s_webentryhost, s_login_page and s_external_url. This is because these context file attributes point to appstier1 by default. However, if we have already implemented the Load Balancer configuration, then these updates are already done and there is no need to do anything in this step.

s_webentryhost  : appstier2
s_login_page : http://appstier2.company.com:8050/OA_HTML/AppsLogin on Application Server 2
s_external_url : http://appstier2.company.com:8050/

Note: modify the above apps node name (appstier) according to your second apps node's hostname..
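
In practice, this step looks roughly like the following on the second node (a sketch, assuming the standard EBS 12.2 run edition environment is sourced; adjust the values to your own hostnames and ports):

# On the surviving (second) apps node, with the run filesystem environment sourced:
vi $CONTEXT_FILE                      # update s_webentryhost, s_login_page and s_external_url
$ADMIN_SCRIPTS_HOME/adautocfg.sh      # run autoconfig so that the changes are propagated to the configuration files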

3) Start the apps tier services using adstrtal.sh, but with the msimode argument.
($ADMIN_SCRIPTS_HOME/adstrtal.sh -msimode)

msi means Managed Server Independence.. As the first node is down, our Admin Server is down too, so the managed servers (like oacore) cannot be started on the second node unless the msimode argument is used..
Without msimode, the managed servers will try to reach the Admin Server to read their configuration and they will fail.. Without msimode, we see errors like "ERROR: Skipping startup of forms_server2 since the AdminServer is down" while executing adstrtal.sh.

Here is the definition of MSI mode (from Oracle):
When a Managed Server starts, it tries to contact the Administration Server to retrieve its configuration information. If a Managed Server cannot connect to the Administration Server during startup, it can retrieve its configuration by reading configuration and security files directly. A Managed Server that starts in this way is running in Managed Server Independence (MSI) mode.

Well.. As you see, in an NFS-based shared application filesystem environment, there are at most 3 things to do for starting the apps tier services on the second node (supposing the first one has crashed or is down).

I tested this approach and it took me 15 minutes to complete. Of course, the duration depends on the storage mapping and a bunch of other factors, but it is certain that there is downtime involved.

That's why I recommend using a non-shared APPL_TOP, or a shared APPL_TOP on an ACFS filesystem, or a shared APPL_TOP with NFS shares that come directly from the storage :)

Thursday, September 14, 2017

ODA X6-2M -- virt-manager display problem/garbage characters // yum install dejavu-lgc-sans-fonts

This is an important little piece of information.
This is actually about Linux KVM (Kernel-based Virtual Machine), but as I'm dealing with Oracle, I'm looking at it from an Oracle perspective.
Yes.. The new ODA X6-2M, as you may already know, gives us the option to use Linux KVM for virtualization.
This new KVM thing (it is new from an Oracle perspective) has a GUI to manage the VM environment. It is a management interface that eases the administration of the KVM environment (on ODA or anywhere else).
It is called Virtual Machine Manager and it is executed using the command virt-manager (as root).
As it is a GUI, it needs an X environment to run.
In the Oracle Linux world, as you may also agree, we mostly use vncserver for displaying X screens remotely.
So, we connect to the vncserver (or we can use an ILOM remote console or anything that does the same thing) and execute virt-manager to start the Virtual Machine Manager for KVM.
The issue starts here.
After deploying ODA and enabling KVM, we run the virt-manager command and we see garbage characters.
We actually see little squares rather than the proper characters and fonts.

So, in order to fix this, we basically need to install the fonts that Virtual Machine Manager needs.
A simple yum command does the job, and this little piece of information may save you time :)

Fix: yum install dejavu-lgc-sans-fonts
Tested & Verified in the following ODA X6-2M environment :

System Version
---------------
12.1.2.11.0

Component                         Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                    12.1.2.11.0               up-to-date        
GI                                        12.1.0.2.170418       up-to-date        
DB                                       11.2.0.4.170418       up-to-date        
ILOM                                  3.2.7.26.a.r112632   3.2.9.23.r116695  
BIOS                                   38050100                 38070200          
OS                                        6.8                           up-to-date     

EBS/RAC -- setting TNS_ADMIN for srvctl, the SQLNET.ALLOWED_LOGON_VERSION* parameters, and "no matching protocol" errors.

This post is actually a weird one :), but I find it useful.
It is like a mix, as it is about EBS, about RAC, about the SQLNET.ALLOWED_LOGON_VERSION* parameters, about IFILE settings and about srvctl..
I'm writing this one because I have been in the field dealing with similar issues in almost every EBS-Exadata migration.
In this post, I won't give all the instructions and definitions related to the thing that I want to explain. That is, I will suppose that the readers of this post already know the following:
what the local listener does, what RAC means, what the "srvctl" utility is, what sqlnet.ora and the TNS_ADMIN environment variable do, and of course what the SQLNET.ALLOWED_LOGON_VERSION* parameters are used for.

Let's jump into our topic.
As you may know, we have the autoconfig utility in EBS environments.
This autoconfig utility regenerates certain db-related files when it is run on the db tier.
Today, I will concentrate on sqlnet.ora.
My followers may recognize this from my earlier posts, but today, I'm writing about something different, actually.
We know that autoconfig regenerates sqlnet.ora in the $ORACLE_HOME/network/admin/<context_name> directory, and we know that we will lose anything that we manually write there (after a db tier autoconfig run).
That's why, as we know (it is also documented), we need to use IFILEs.
So far so good.
What we also know is that, in RAC, we use the listeners that run from the GRID home.
This is not a must, but it is the recommended approach.
So our local listeners run from the GRID homes, and we use IFILE entries in the TNS configuration files stored in the GRID homes to make Oracle see the actual sqlnet.ora files that are maintained in our EBS RDBMS homes.
At the end of the day, we make Oracle read what is stored via the IFILE: the actual TNS configuration files maintained in the RDBMS home, right in the directory "$ORACLE_HOME/network/admin/<context_name>".
Note that the ifiles are not regenerated by autoconfig, so it is safe to keep manual settings in them.

Well, this story applies to sqlnet.ora and the other TNS files.
However, we need to do one more thing if we are running our database in a RAC environment.
That is the cluster registration.
I mean, we want our TNS_ADMIN to point to the GRID home.
Actually, we want the following chain:
TNS_ADMIN -> GRID_HOME/network/admin ->  RDBMS_HOME/context_name/sqlnet.ora -> RDBMS_HOME/context_name/sqlnet_ifile.

In order to get this, we include an IFILE setting in the GRID_HOME sqlnet.ora, which points to the sqlnet.ora in RDBMS_HOME/network/admin/<context_name>.
The sqlnet.ora in RDBMS_HOME/network/admin/<context_name> already has an IFILE setting pointing to the sqlnet_ifile stored in the same directory (this comes by default).
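
Put differently, the chain looks roughly like this (a sketch; the home paths are illustrative and <context_name> is your own context directory):

# $GRID_HOME/network/admin/sqlnet.ora -- the copy that the GRID home listener actually reads:
#   IFILE=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/<context_name>/sqlnet.ora
#
# $ORACLE_HOME/network/admin/<context_name>/sqlnet.ora -- autoconfig-managed, already contains:
#   IFILE=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/<context_name>/sqlnet_ifile.ora
#
# $ORACLE_HOME/network/admin/<context_name>/sqlnet_ifile.ora -- the safe place for manual settings, for example:
#   SQLNET.ALLOWED_LOGON_VERSION_SERVER=8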

So far so good.
But at this point, we need to set TNS_ADMIN, right?
We can set it in our terminal using export TNS_ADMIN, but it will not work for srvctl.
In RAC, we mostly use srvctl to start listeners and databases.

So, we execute srvctl setenv to set this TNS_ADMIN environment variable.
This way, we make the srvctl utility aware of our TNS_ADMIN setting.

But for what will we set the TNS_ADMIN environment variable? For the listener or for the database?
At first glance, we may think that we must set it for the listener (the local one), using a command like srvctl setenv listener -l listener_name -T TNS_ADMIN=$GRID_HOME/network/admin

"However, we must actually set it for db." --Actually this info made me writing this post :)"

This is because, if we set TNS_ADMIN for the listener, it gets overwritten.

Here is the info from Oracle Support:
Ref: Dynamic Registration and TNS_ADMIN (Doc ID 181129.1)

At instance startup, PMON picks up the TNS_ADMIN environment variable (in  the same way that the listener does in Section (a) above). When PMON subsequently registers this instance, this value of TNS_ADMIN is passed to the listener; causing PMON's TNS_ADMIN value to overwrite the value the listener currently has.
If TNS_ADMIN is not set when PMON starts, then after registration, the listener's TNS_ADMIN value is cleared (ie, behaves as if not set).

So, this is for those who are trying to fix the "no matching protocol" errors -> you should make your SQLNET.ALLOWED_LOGON_VERSION* settings in the sqlnet_ifile that is stored in RDBMS_HOME/network/admin/<context_name>, and then set TNS_ADMIN using a command like "srvctl setenv database -d DB_NAME -T TNS_ADMIN=blabla"... (for the database, not the listener)
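
For example (a sketch; the database name and the TNS_ADMIN path are placeholders, and the exact srvctl option syntax can differ between Grid Infrastructure versions, so check "srvctl setenv database -h" on your system):

# Set TNS_ADMIN for the database resource (not the listener) and verify it:
srvctl setenv database -d <db_unique_name> -T TNS_ADMIN=/u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/<context_name>
srvctl getenv database -d <db_unique_name>
# Restart the database with srvctl, so that PMON picks the value up and passes it to the listener during registration:
srvctl stop database -d <db_unique_name>
srvctl start database -d <db_unique_name>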

Well. This is the tip of the day :)  Take care.