Troubleshooting Oracle 11.2 DB Service Res stopped, ORA-12514 after ‘stuck archiver’ ORA-257
Recently I ran into a case where exhausted archive log space in an Oracle database caused some applications to get ORA-12514 when connecting, while connecting with a service name that did exist on the listener returned ORA-257. DBAs know that ORA-12514 means the service name used by the application cannot be matched on the listener, and ORA-257 means archiving cannot complete because the archive destination is full, so how did ORA-12514 end up being related to archiving?
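Before touching the listener in a case like this, a quick check of the archive destination is worthwhile; a minimal sketch, assuming the archived logs go to the fast recovery area:

  -- overall fast recovery area limit vs. usage
  SELECT name, space_limit/1024/1024/1024 AS limit_gb,
         space_used/1024/1024/1024 AS used_gb
    FROM v$recovery_file_dest;
  -- usage broken down by file type (ARCHIVED LOG, BACKUP PIECE, ...)
  SELECT file_type, percent_space_used, percent_space_reclaimable
    FROM v$recovery_area_usage;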
Migrating Oracle to PostgreSQL: a syntax error, and a look at Oracle CBO Query Transformations
Recently, while migrating Oracle to a domestic database built on PostgreSQL, I ran into some compatibility issues, which reminded me how tolerant Oracle is of problematic SQL. I am not sure how many of the databases that claim Oracle syntax compatibility also implement Oracle's query transformations, especially the elimination of unnecessary work such as ORDER BY elimination, Join Elimination, Common Sub-expression Elimination, and subquery elimination. Here I record the optimizer transformations as they appear in an Oracle 10053 trace.
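For example, join elimination, one of the transformations covered here, can be shown with a hypothetical parent-child pair of tables; the table names and the foreign key are assumptions for this sketch:

  -- with a validated foreign key emp.deptno -> dept.deptno and no columns
  -- selected from dept, the CBO can remove the join to dept entirely
  SELECT e.empno, e.ename
    FROM emp e
    JOIN dept d ON e.deptno = d.deptno;
  -- the transformed query behaves like:
  --   SELECT e.empno, e.ename FROM emp e WHERE e.deptno IS NOT NULL;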
Alert: performance difference between a NOT NULL integrity constraint and a CHECK (xx IS NOT NULL) constraint
Recently, while migrating Oracle to a domestic PostgreSQL-family database, I noticed an issue. To make the data migration go smoothly, the NOT NULL constraints defined on table columns in Oracle were all changed to separate CHECK NOT NULL constraints, so the constraints could be added after the tables were created and the data loaded. Although the two are almost identical functionally, there is still some difference in performance, which this post briefly records.
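The two forms being compared look roughly like this; the table and column names are only illustrative:

  -- inline NOT NULL integrity constraint on the column
  CREATE TABLE t1 (id NUMBER, name VARCHAR2(30) NOT NULL);

  -- separate CHECK constraint, which can be added after the data load
  CREATE TABLE t2 (id NUMBER, name VARCHAR2(30));
  ALTER TABLE t2 ADD CONSTRAINT t2_name_nn CHECK (name IS NOT NULL);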
Migrate Oracle to PostgreSQL (series): START WITH ... CONNECT BY PRIOR ... ORDER SIBLINGS BY
My team and I have been working on Oracle-to-domestic-database migration projects. In Oracle, when loading menus or other parent-child records, the START WITH ... CONNECT BY PRIOR syntax provides hierarchical queries and tree traversal, but in PostgreSQL-based databases it has to be rewritten, typically with a recursive common table expression (WITH RECURSIVE), and ORDER SIBLINGS BY, which sorts rows within each level of the hierarchy, also needs an equivalent. Here is a simple demonstration.
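A minimal sketch of the rewrite, assuming a menu table with columns (id, parent_id, name, seq):

  -- Oracle hierarchical query
  SELECT id, parent_id, name
    FROM menu
   START WITH parent_id IS NULL
  CONNECT BY PRIOR id = parent_id
   ORDER SIBLINGS BY seq;

  -- PostgreSQL-family rewrite: the accumulated path array lets the final
  -- ORDER BY sort siblings within each branch, emulating ORDER SIBLINGS BY
  WITH RECURSIVE tree AS (
    SELECT id, parent_id, name, ARRAY[seq] AS sort_path
      FROM menu
     WHERE parent_id IS NULL
    UNION ALL
    SELECT m.id, m.parent_id, m.name, t.sort_path || m.seq
      FROM menu m
      JOIN tree t ON m.parent_id = t.id
  )
  SELECT id, parent_id, name FROM tree ORDER BY sort_path;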
Troubleshooting ORA-04031: unable to allocate 12312 bytes of shared memory ("shared pool","unknown object","KKSSP^xx","kglseshtTable")
Recently, in a customer environment, I ran into a fairly rare problem: the database instance's alert log kept reporting ORA-4031 errors, leading to a series of cascading problems including ORA-600 errors, archiving failures, and ASM instance communication failures. At the same time, the number of INACTIVE connections grew abnormally, which could eventually exhaust the available database connections. The core of the problem was in the ASM instance, whose log repeatedly reported ORA-4031 on kglseshtTable; although the ASM instance's SGA was already set to 4GB, which should normally be more than enough, this was the first time this kind of problem had appeared.
The environment is Oracle 11.2.0.3, deployed as a 2-node RAC on RHEL 6.9 with HugePages enabled.
Alert: not every datablock change version is saved in the flashback log in Oracle
Critical to performance is that not every change is copied to the flashback buffer— only a subset of changes. If all changes to all blocks were copied to the buffer, then the overhead in terms of memory usage and the amount of extra disk I/O required to flush the buffer to disk would be crippling for performance. Internal algorithms limit which versions of which blocks are placed in the flashback buffer, in order to restrict its size and the frequency with which it will fill and be written to disk.
Troubleshooting Oracle LGWR wait event ‘reliable message’, high %sys CPU usage, and an instance crash while DSG is running
Recently, at a bank customer's Oracle site, the database kept restarting after DSG was deployed for data extraction, and they wanted the cause analyzed. The environment is Oracle 12c, a 2-node RAC on RHEL x86-64 7.3; the database instance is a Data Guard physical standby using multitenant. At first LGWR waited on ‘reliable message’, then ‘IPC Send timeout detected’ appeared; a few minutes later instance 2 was evicted, and shortly afterwards instance 1 crashed. The Oracle home and / are on XFS file systems. During the problem a large number of processes were active, shown by ps in state D with WCHAN waiting in function calls starting with xlog_G. This post records the incident.
Migrate Oracle to PostgreSQL (series): AS TABLE OF
In PostgreSQL, PL/pgSQL does not directly support the TABLE OF type, a collection type from Oracle PL/SQL. However, PostgreSQL provides other ways to achieve similar functionality, such as arrays, composite types, and table-valued (set-returning) functions. Recently, while replacing a production Oracle database with PostgreSQL, we needed to split a comma-separated string and return the pieces as a table.
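A minimal PostgreSQL-side sketch of that requirement; the function name split_to_table is an assumption, not a built-in:

  -- a built-in set-returning function does most of the work
  SELECT regexp_split_to_table('a,b,c', ',') AS val;

  -- or wrapped as a table-valued function, roughly standing in for a TABLE OF type
  CREATE OR REPLACE FUNCTION split_to_table(p_list text, p_sep text DEFAULT ',')
  RETURNS TABLE(val text) AS $$
    SELECT unnest(string_to_array(p_list, p_sep));
  $$ LANGUAGE sql;

  SELECT * FROM split_to_table('a,b,c');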
Issues when migrating Oracle to domestic databases: invalid character set encoding in VARCHAR data
Recently, while migrating from Oracle to a PostgreSQL-family database, the migration tool (Java JDBC) failed on several tables. Both the source and target databases use the GBK character set, but the migration reported ERROR: character with byte sequence 0xe4 0xb8 0xad in encoding “UTF8” has no equivalent in encoding “GBK”, which means some column values in the GBK database are actually UTF8-encoded. Querying such data under the GBK character set produces garbled text; it was confirmed that the data had been stored incorrectly at insert time in the source database, something to watch out for during migration.
Troubleshooting Oracle 12.1 ‘enq: TC – contention’ wait event and related issues
Recently, a customer’s Oracle database RMAN backup job failed; a full backup took an unusually long time to complete. After checking the database, the backup process was found waiting on ‘enq: TC – contention’. The environment was Oracle 12.1 2-node RAC. I would like to record this problem.
Issues when migrating Oracle to domestic databases: invalid date 0000-00-00 (zero year) in DATE data
My team and I have spent the past two years on projects migrating Oracle to domestic databases. Data migration is usually easier than rewriting PL/SQL objects for compatibility; in general, once the migration tool has a good data type mapping between source and target, it just works. But there are exceptions, such as the invalid numbers recorded in 《Issues when migrating Oracle to domestic databases: invalid number in NUMBER data》: some values can be stored in Oracle but cannot be stored in the target database. Recently, while migrating an Oracle database to a PostgreSQL-family domestic database, we hit invalid date values with year 0000 in a DATE column.
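One way to hunt for such rows on the Oracle side is to inspect the internal DATE bytes with DUMP, since Oracle stores century and year as value+100 and a zero year therefore shows up as 100,100; the table and column names and the exact byte pattern are assumptions in this sketch:

  -- internal DATE layout: century+100, year+100, month, day, hour+1, min+1, sec+1
  -- so 0000-00-00 starts with bytes 100,100,0,0
  SELECT rowid, DUMP(dt) AS raw_bytes
    FROM t
   WHERE DUMP(dt) LIKE 'Typ=12 Len=7: 100,100,0,0%';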
Oracle Column Group Extended Statistics
Extended statistics (also known as column groups) are one of the important statistics improvements introduced in Oracle 11g. Although the Oracle Cost Based Optimizer can obtain correct single-column selectivity estimates, it cannot compute the cardinality of a combination of two or more correlated columns appearing in query predicates. Column group statistics created on such column combinations are meant to help the CBO recognize the correlation and arrive at accurate estimates. In some situations, however, the CBO refuses to use column group statistics.
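Creating a column group looks roughly like this; the owner, table, and column names are illustrative:

  -- create the column group extension, then regather statistics so it is populated
  SELECT dbms_stats.create_extended_stats(
           ownname   => 'SCOTT',
           tabname   => 'T',
           extension => '(col1, col2)') AS ext_name
    FROM dual;

  EXEC dbms_stats.gather_table_stats('SCOTT', 'T');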
How to reduce the space of the largest object (table system.logmnr_restart_ckpt$) in the SYSTEM tablespace
Today, I noticed that the customer’s SYSTEM tablespace usage is quite large, currently around 3.5TB. The largest object is the system.logmnr_restart_ckpt$ table, already close to 2TB in size. The next largest is the aud$unified table used for unified auditing, which I covered in yesterday’s post 《Know more about Unified Auditing in Oracle 19c》.
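The space consumers can be listed with a query like this before deciding what to purge or shrink (FETCH FIRST is 12c+ syntax):

  -- top 10 segments in the SYSTEM tablespace by size
  SELECT owner, segment_name, segment_type,
         ROUND(bytes/1024/1024/1024, 2) AS size_gb
    FROM dba_segments
   WHERE tablespace_name = 'SYSTEM'
   ORDER BY bytes DESC
   FETCH FIRST 10 ROWS ONLY;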
Know more about Unified Auditing in Oracle 19c
Today, a customer encountered a database issue in an Oracle 19c 4-node RAC environment on an Oracle Exadata machine. The database is experiencing a high number of active sessions—thousands in total—indicating waits for ‘enq: hw contention’ and ‘enq: tx contention.’ The blocked business session is executing the SQL statement “insert into AUD$UNIFIED…,” related to unified auditing of the database
How to diagnose slow performance or long execution times with Oracle Data Pump (expdp)?
You can easily track the duration of each export/import operation by directing the export/import job to write timestamps to the logfile using the LOGTIME parameter. For more details, refer to Expdp/Impdp LOGTIME.
However, simply having this information alone is often insufficient, even if you know there was a version or operating system change. What’s really needed to diagnose or analyze performance is concrete data—and that’s where the METRICS and LOGTIME parameters come in handy.
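A minimal sketch of enabling both in a Data Pump parameter file; the directory, file names, and FULL export are only illustrative:

  # exp_metrics.par, run with: expdp system parfile=exp_metrics.par
  DIRECTORY=DATA_PUMP_DIR
  DUMPFILE=full_%U.dmp
  LOGFILE=full_exp.log
  FULL=YES
  # LOGTIME=ALL prefixes every logfile line with a timestamp
  LOGTIME=ALL
  # METRICS=YES reports the objects processed and the time taken per step
  METRICS=YES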
Troubleshooting ‘local write wait’ wait event during Truncate table
I have seen problems with the ‘local write wait’ event in Oracle databases where normal tables were used as temporary working storage before the data was written to another table, and the content of the working tables was then cleared out by periodically truncating them.
Migrate oracle to openGauss/oceanbase/达梦/kingbase: md5 function
About ten years ago I briefly tested Oracle 9i encryption/decryption with dbms_obfuscation_toolkit (part 2), which includes MD5 one-way hashing. Recently an Oracle-to-openGauss migration project needed MD5, so here is a quick note on the replacement: PostgreSQL and openGauss have a built-in md5 function, and the same md5 function also exists in MySQL and MySQL-family products, oceanbase, and 达梦.
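The replacement is essentially a one-liner on each side; the input value is illustrative, and note that Oracle's standard_hash (12c+) returns RAW while md5() returns a lowercase hex string:

  -- Oracle: MD5 via standard_hash (or dbms_obfuscation_toolkit.md5 on very old releases)
  SELECT standard_hash('hello', 'MD5') FROM dual;

  -- PostgreSQL / openGauss: built-in md5()
  SELECT md5('hello');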
Migrate oracle to openGauss: dbms_crypto.encrypt /decrypt functions
When migrating Oracle to openGauss, you may run into data that was encrypted with dbms_crypto in the Oracle database. On the target openGauss side it does not always have to be fully equivalent; sometimes it is enough to provide the encryption functionality, which means rewriting the corresponding stored procedures or writing a custom wrapper function. The data migration also needs sensible planning, for example decrypting first and re-encrypting in the target database, especially when the encryption methods differ, to avoid ending up with source-encrypted data in the target that can no longer be decrypted. Of course, if the application layer can handle encryption, that is ideal.
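As one possible rewrite, assuming the pgcrypto extension (or an equivalent) is available on the target, the raw symmetric functions can stand in for dbms_crypto.encrypt/decrypt; the cipher choice and key handling here are assumptions for the sketch:

  CREATE EXTENSION IF NOT EXISTS pgcrypto;

  -- encrypt then decrypt a text value with AES and a shared key
  SELECT encrypt(convert_to('secret data', 'UTF8'), 'mykey'::bytea, 'aes') AS enc;
  SELECT convert_from(
           decrypt(encrypt(convert_to('secret data', 'UTF8'), 'mykey'::bytea, 'aes'),
                   'mykey'::bytea, 'aes'),
           'UTF8') AS dec;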
Migrate oracle to openGauss: cast_to_raw/cast_to_varchar2 & base64_encode/base64_decode functions
My team and I have recently been migrating Oracle to openGauss (PostgreSQL) and found that some stored procedures use encryption functions, several of which involve encoding packages such as utl_i18n, utl_raw, and utl_encode to encode plaintext values as RAW or base64. This post records the corresponding function implementations after moving from Oracle to openGauss, which basically also apply to PostgreSQL; the next post will cover the encryption functions.
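A rough mapping for the encode/decode part, with illustrative input values:

  -- Oracle: raw cast plus base64 encode/decode round trip
  SELECT utl_raw.cast_to_varchar2(
           utl_encode.base64_decode(
             utl_encode.base64_encode(utl_raw.cast_to_raw('hello'))))
    FROM dual;

  -- openGauss / PostgreSQL equivalents
  SELECT encode(convert_to('hello', 'UTF8'), 'base64');        -- base64 encode
  SELECT convert_from(decode('aGVsbG8=', 'base64'), 'UTF8');   -- base64 decode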
v$active_session_history slow: how to query the full text of a view in v$fixed_view_definition?
Recently, in a customer’s Oracle database, a large number of active sessions were querying v$active_session_history. The cause was a monitoring tool refreshing ASH data; the SQL normally returns within a second, but in this database a single execution took nearly 6 minutes. I was curious to analyze the case, but since the customer environment could not be reached remotely there is no analysis log here, only my line of reasoning. While looking into v$active_session_history I also found that getting the complete SQL definition of a fixed view is not that easy, so this post also records ways of reading the FIXED VIEW definition from memory with oradebug peek, objdump, gdb, and so on.
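For reference, the straightforward query and its limitation look like this; the 4000-character truncation of VIEW_DEFINITION is what makes the memory-reading approaches necessary:

  -- the column is VARCHAR2(4000), so long definitions such as
  -- GV$ACTIVE_SESSION_HISTORY come back truncated
  SELECT view_definition
    FROM v$fixed_view_definition
   WHERE view_name = 'GV$ACTIVE_SESSION_HISTORY';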