Answer: Well, how much time do I gain? :)
In this case, having AI inside the database may be considered the Kepler Moment for enterprise data.
While Transformer models are the engines of generation, Oracle provides the high-speed vector engine for retrieval. The same dot-product and similarity-search operations that allow a transformer to find the 'dog' in a sentence are now happening directly inside the database.
So, it is a Keplerian shift: the database is no longer a passive storage room; it's an active participant that understands the gravitational pull between our data points. We are moving from "What happened?" to "What is the underlying pattern?"
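Just to make this tangible, here is a minimal sketch of what such a similarity search looks like when it runs inside the database. It assumes an Oracle Database 23ai instance with AI Vector Search, a hypothetical DOCS table with a VECTOR column named EMBEDDING, and hypothetical credentials; the query vector is a toy 3-dimensional embedding, just for illustration.

sqlplus -s app_user/app_pass@mypdb <<'SQL'
-- hypothetical schema: DOCS(id NUMBER, doc_text CLOB, embedding VECTOR(3, FLOAT32))
-- return the 5 rows whose embeddings are closest (by cosine distance) to the query vector
SELECT id
FROM   docs
ORDER  BY VECTOR_DISTANCE(embedding, TO_VECTOR('[0.12, 0.48, 0.91]'), COSINE)
FETCH FIRST 5 ROWS ONLY;
SQL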
Of course, a new turning point is still needed. Let's see when a turning point like the one in 2017 will come again.
Therefore, I would like to conclude with these words:
In a broader context, I think we should be waiting for a new fundamental shift, maybe another 2017 Moment: a fundamental shift that will take the AI world beyond the limits of sophisticated pattern matching. We should be looking for an architectural leap where reasoning, causal understanding, and the essence of human experience are no longer just simulated, but are inherently woven into the very fabric of the model.

In a traditional architecture, even for a single block read, we (in the background) issue an I/O request through the operating system and then wait for the storage controller to process it. The data travels over the SAN network. Then our database server's OS receives it, handles interrupts, and context switches. Finally, we get our data in our Oracle Database instance.
Each of these steps adds latency. Across thousands of transactions, these delays (even though they are measured in microseconds) create significant bottlenecks. This is where Exadata's features come into play.
With RDMA (Remote Direct Memory Access), we bypass the kernel and go directly to memory. Exadata completely re-engineers this critical path for OLTP I/O: instead of the database server's CPU and OS being involved in every single I/O operation, Exadata leverages RDMA over the RoCE interconnect.
With RDMA, the database server can directly access memory on the Exadata Storage Cells and get the data it needs with only minimal involvement from the storage cell's CPU or OS.
So, basically, it bypasses the kernel, which means less context switching and fewer interrupts.
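To make the "every step adds latency" point concrete, here is a tiny back-of-the-envelope calculation. The per-step numbers are purely illustrative assumptions, not measurements; the point is only that the conventional path pays for the OS and SAN hops on every single read, while the RDMA path does not.

# Purely illustrative arithmetic; all per-step costs (microseconds) are assumptions.
awk 'BEGIN {
  os_entry_interrupts_ctx = 15;   # syscall entry, interrupts, context switches (assumed)
  san_and_controller      = 200;  # SAN round trip plus storage controller work (assumed)
  conventional = os_entry_interrupts_ctx + san_and_controller;

  rdma_read = 20;                 # direct RDMA read from storage server memory (assumed)

  printf "per 8K block read: conventional ~%d us, RDMA ~%d us\n", conventional, rdma_read;
  printf "per 100,000 reads: ~%.1f s vs ~%.1f s\n", conventional * 1e5 / 1e6, rdma_read * 1e5 / 1e6;
}'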
Okay, let's visit the subject of Persistent Memory (PMEM). Exadata Storage Servers come equipped with PMEM, a revolutionary technology that sits between DRAM and Flash.

Until next time, keep optimizing, keep questioning, and keep digging into those internals.
Feel free to share your thoughts here and your questions on my forum.
I want to share the solution we implemented for a recent issue with Oracle Linux KVM. There was a misconfiguration of the SSL certificates in this problematic KVM environment: a manual attempt had been made to renew the certificates, and the problem arose after that.
To quickly summarize the issue: KVM hosts were appearing in Down status in the OLVM (Oracle Linux Virtualization Manager) interface. Consequently, VM information and metadata were inaccessible.

We followed the MOS note OLVM: How to Renew SSL Certificates that are Expired or Nearing Expiration (Doc ID 3006292.1), but the OlvmKvmCerts.sh script was missing. So we created an SR and got the script from Oracle Support. After that, the steps to the solution were as follows:
We renewed the certificates using the OlvmKvmCerts.sh script (OlvmKvmCerts.sh renew-all), executed on the OLVM node.

Recently, I saw people dealing with errors while configuring an Oracle GoldenGate Extract process against a physical standby database. In this post, I will share the cause of this error and the recommended architectural approach to resolve it.
When starting the Extract, the process abends with the following messages in the ggserr.log:
This error occurs when a GoldenGate Extract is configured to pull data directly from a physical standby database that is not properly set up for such operations. By design, traditional Extract requires access to specific redo log structures and supplemental logging.
A standard physical standby (Active Data Guard / ADG) is typically in a read-only state and does not inherently support direct Extract operations in the same way a primary or a logical standby does.
Well, this means it is not supported to do CDC with GoldenGate directly from a physical standby, including the Active Data Guard-based ones.
So if we need to offload the extraction process from our primary production system to a standby environment, we should consider a downstream capture configuration. I mean, GoldenGate Integrated Extract can be configured to work with Active Data Guard using a downstream capture configuration. In this setup, the mining process runs on a separate database (the downstream mining database), which also offloads the CPU and I/O overhead from the primary instance.
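Here is a rough sketch of the key pieces of such a downstream capture setup. All names (prod, dwnmine, ggadmin, extdwn) and passwords are hypothetical, and this is not a complete configuration; standby redo logs and LOG_ARCHIVE_CONFIG on the downstream mining database, the full Extract parameter file, and so on are still needed.

# On the source (primary) database: ship redo to the downstream mining database.
sqlplus -s / as sysdba <<'SQL'
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(prod,dwnmine)' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dwnmine ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwnmine' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;
SQL

# On the GoldenGate side: log in to both the source and the mining database,
# then register the Extract against the downstream mining database.
ggsci <<'EOF'
DBLOGIN USERID ggadmin@prod, PASSWORD ggadmin_pwd
MININGDBLOGIN USERID ggadmin@dwnmine, PASSWORD ggadmin_pwd
REGISTER EXTRACT extdwn DATABASE
EOF

# In the Extract parameter file (dirprm/extdwn.prm), the mining database is pointed to with:
#   TRANLOGOPTIONS MININGUSER ggadmin@dwnmine, MININGPASSWORD ggadmin_pwd
#   TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)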
In summary, if we need to use a standby database as the source for GoldenGate, we should bring a downstream mining server into the picture. Production also sends redo to this mining server, which is an Oracle database running in read-write mode, and GoldenGate captures from there.
The data mining server (an Oracle database) is not a physical standby, by the way. It is configured to receive redo from the source, but it is in read-write mode. Redo transport is still there: logs are shipped over the network from the source database to the mining database, and the log mining server inside that database extracts the changes from the redo log (or archived log) files and serves them to the GoldenGate Extract process.

That's the tip for today. I hope it helps.
You are using Oracle GoldenGate 21.3, the classic one (not the Microservices Architecture-based one), and you want to monitor the activities of GoldenGate?
You installed Oracle Enterprise Manager, deployed the GoldenGate plug-in, installed the JAgent / Monitoring Agent 12.2.1.2 on the targets, and configured them.
You saw the PMSRVR and JAGENT processes in the GGSCI output, you started them, they were in RUNNING status, and you told yourself: okay, so far so good.
Then you used Oracle Enterprise Manager's auto discovery to discover the monitoring agents, supplying the relevant information such as the host and port.
Oracle Enterprise Manager didn't report any errors and the discovery completed successfully, but nothing changed. The GoldenGate monitoring agents could not be discovered.
Then you jumped onto the servers where you installed those agents and checked the listen ports using netstat, and everything seemed fine.
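For example, something along these lines (5559 here is only a placeholder; use whatever port your JAgent is actually configured to listen on):

netstat -tlnp | grep 5559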
However, when you checked the logs of those agents, you saw something like the following:
Could not get WSInstance Information , [[

So: OGG for high-speed, real-time capture and replication into a staging area, and ODI for complex, CDC-aware transformations into the final structure. In the end, we achieve an architecture that is both highly efficient and massively scalable.
In this blog post I share a real production incident and its resolution. While the issue was severe, proper troubleshooting methodology and rescue media made recovery possible without data loss.
An Oracle Linux 8.10 production server suddenly became unresponsive. The system would boot but freeze indefinitely at the graphical login screen, showing only the Oracle Linux logo with a loading spinner that never completed.
No amount of waiting helped. The system was completely inaccessible through normal means. SSH connections timed out, and the console remained locked at the authentication screen.
Our initial discovery was through emergency shell access. That is, to diagnose the issue, we bypassed the normal boot process using GRUB emergency parameters:
# At GRUB menu, press 'e' on the kernel line

Every foundational system command was broken. This was not a simple misconfiguration; this was fundamental system library corruption.
GLIBC (GNU C Library) is the core system library that nearly every Linux program depends on. It provides essential functions for memory allocation, string handling, threading, and the system call wrappers. Without a working GLIBC, the system cannot function.
That's enough background.
So, Oracle Linux 8.10 ships with GLIBC 2.28. However, our system's binaries were looking for GLIBC 2.33 and 2.34, which are part of Oracle Linux 9 (based on RHEL 9).

[root@myprodddb01 /]# /lib64/libc.so.6 --version
GNU C Library (GNU libc) stable release version 2.28
The library version was correct (2.28), but the programs themselves (rpm, yum, ping, dnf) were looking for libraries from Oracle Linux 9.
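For reference, one way to see which GLIBC versions a binary actually requires is to inspect its dynamic symbols with objdump, run from an environment that still has a working toolchain (for example the rescue shell we describe below, if binutils is available there). The path assumes the damaged root is mounted at /mnt/sysroot:

# List the GLIBC version references embedded in the broken rpm binary.
objdump -T /mnt/sysroot/usr/bin/rpm | grep GLIBC_2.3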
How did this happen? In our case, the root cause is not certain yet, but we have some clues about the possible causes of such a catastrophic situation.

Anyway, to fix the system, we needed rpm to reinstall packages. But rpm itself was broken because it required GLIBC 2.33. We couldn't use yum or dnf for the same reason. Even basic networking tools like ping were non-functional. The broken system could not fix itself.
The solution was rescue-mode recovery.
We booted from the Oracle Linux 8 ISO and entered the rescue environment, which automatically detected and mounted our system under /mnt/sysroot and provided working tools with the correct GLIBC 2.28.
sh-4.4# /lib64/libc.so.6 --version
GNU C Library (GNU libc) stable release version 2.28
Copyright (C) 2018 Free Software Foundation, Inc.

From the rescue environment, we also listed the Oracle Linux 9 packages that had been installed on our OL8 system.
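One way to produce that list from the rescue shell is to point the rescue environment's rpm at the mounted root; a sketch, assuming the damaged system is mounted at /mnt/sysroot:

# Query the broken system's RPM database using the rescue environment's own rpm.
rpm --root /mnt/sysroot -qa | grep el9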
And we copied the GLIBC 2.28 libraries (and libcrypto) from the rescue environment to our broken system:
cp -fv /lib64/libc-2.28.so /mnt/sysroot/lib64/
cp -fv /lib64/libc.so.6 /mnt/sysroot/lib64/
cp -fv /lib64/libm*.so* /mnt/sysroot/lib64/
cp -fv /lib64/libpthread*.so* /mnt/sysroot/lib64/
After these actions, we chrooted into the system to verify, and we tested the foundational commands; they all ran successfully:
chroot /mnt/sysroot
rpm --version
ping -c 2 8.8.8.8
yum --version
rpm -q glibc
-- Expected: glibc-2.28-251.0.1.el8.x86_64
We rebooted and tested:
exit
reboot
This fixed the issue. During the emergency shell access, we had also reset the root password:
In emergency mode (init=/bin/bash):
mount -o remount,rw /
passwd root
# Enter new password
sync
reboot
Well, we fixed the issue and learned a few important things.