DataPump Export (EXPDP) Error ORA-39095 Dump File Space Has Been Exhausted: Possible Solutions
http://www.anbob.com/archives/2350.html
Question:
I am getting the ORA-39095 error when trying to expdp a schema. The message looks like this:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
How do I fix the ORA-39095 error?
Answer:
There are several likely reasons for this error:
1. The FILESIZE parameter is used to limit the size of the dump files. Data Pump export currently assumes that all metadata for an object must be written into one dump file; there is no support for splitting metadata across dump files. Some metadata objects require hundreds of megabytes to describe them, so if the dump file sits on a file system with a 2 GB file size limit, there is no way to export the metadata for that object. (Table data, by contrast, can already be split across multiple files.)
Solution:
Do not use the FILESIZE parameter during export.
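For example, a minimal export that simply omits FILESIZE might look like the sketch below; the connect string, schema name and directory object are reused from the example later in this article, and the dump file and log file names are only illustrative.
# Omit FILESIZE so Data Pump can write all metadata for an object into
# one dump file of whatever size it needs
expdp anbob/anbob schemas=weejar directory=d dumpfile=weejar_full.dmp logfile=weejar_full.log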
2. If the total dump file size allowed by the Data Pump parameters (FILESIZE multiplied by the number of dump files) is smaller than the size of the export actually produced, ORA-39095 is returned. For example:
expdp anbob/anbob schemas=weejar filesize=10m directory=d dumpfile=b.dmp
…
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
..
Solution:
Use a bigger FILESIZE value, increase the number of dump files listed in the DUMPFILE parameter, or use the dynamic %U substitution in the dump file name.
Note that a %U specification can expand to at most 99 files. If the dump to be produced is larger than FILESIZE * 99, the error is still returned; in that case use several templates, for example dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp.
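A sketch of both variants, reusing the schema from the example above; the dump file names, the 100m and 4g FILESIZE values, and the dmpdir1/dmpdir2/dmpdir3 directory objects are illustrative assumptions.
# Let Data Pump create as many 100 MB pieces as it needs (up to 99 for one template)
expdp anbob/anbob schemas=weejar directory=d filesize=100m dumpfile=weejar_%U.dmp logfile=weejar.log

# If 99 pieces of one FILESIZE are still not enough, spread the pieces
# over several directory objects, each with its own %U template
expdp anbob/anbob schemas=weejar filesize=4g dumpfile=dmpdir1:part1_%U.dmp,dmpdir2:part2_%U.dmp,dmpdir3:part3_%U.dmp logfile=dmpdir1:weejar.log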
3. Combining the export with compression tools in a scheduled OS script. When a batch or shell script runs a Data Pump export and then compresses the dump files with a tool such as zip or gzip, this error can appear randomly: some runs of the expdp script finish successfully while others fail.
Solution:
This is a synchronization issue between expdp and the compress or move command. Give the expdp command enough time to close all of its files before starting to compress them with tools like zip, gzip or others. This can be accomplished by introducing a 'sleep 180' on Unix or a 'timeout /t 180' on Windows between the two steps, as in the sketch below.
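A minimal Unix shell sketch of the pattern, assuming the directory object d points at /u01/dpdump; the path and file names are illustrative.
#!/bin/sh
# Run the export, then give Data Pump time to close all dump files
# before the compression step touches them
expdp anbob/anbob schemas=weejar directory=d dumpfile=weejar_%U.dmp logfile=weejar.log

sleep 180                       # wait for expdp to release the dump files

gzip /u01/dpdump/weejar_*.dmp   # compress only after the pause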
4. When using one dump file, or fewer dump files than the PARALLEL value, several worker (slave) processes wait for the file locked by another process before they can write, so the export gains nothing from parallelism anyway. Sometimes the worker holding the lock does not release it after finishing, because the lock is only released when the dump job ends, and the job cannot end while the other processes are still waiting to write to the file.
Solution:
Use a number of dump files equal to or greater than the PARALLEL value, or do not use the PARALLEL clause at all.
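For example, with PARALLEL=4 one could supply a %U template so each worker can open its own dump file; the file names and the 2g FILESIZE are illustrative.
# Four workers, and a %U template so every worker gets its own dump file
expdp anbob/anbob schemas=weejar directory=d parallel=4 filesize=2g dumpfile=weejar_%U.dmp logfile=weejar_par.log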
5. Old jobs may remain in the Data Pump export master tables. expdp automatically creates a job (master) table in the schema of the user running the export, with a name like SYS_EXPORT_SCHEMA_nn, and leftovers from earlier jobs can interfere with the new export.
Solution:
Use a different userid to export the schema, for example run the export as the SYSTEM user to export the anbob schema instead of running it as the anbob user itself.
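For instance, running the export as SYSTEM means the SYS_EXPORT_SCHEMA_nn master table is created under SYSTEM rather than under anbob; the password and file names below are placeholders.
# Export the anbob schema while connected as SYSTEM, so the job's master
# table is created in SYSTEM's schema instead of anbob's
expdp system/password schemas=anbob directory=d dumpfile=anbob_%U.dmp logfile=anbob.log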
6. Other, less common causes are possible as well.
References: My Oracle Support (MOS) notes.