Issue while loading an iText jar into Oracle
Hi Guys,
I have implemented PDF generation using iTextPDF 5.1.0. It works in Java and I can generate the PDF. But when I load the jar file into the database, most of the classes turn out to be invalid. When I investigated the error, I found the following:
PDFGENERATOR:10: cannot access com.itextpdf.text.Element
class file has wrong version 49.0, should be 48.0
import com.itextpdf.text.Element;
Please remove or make sure it appears in the correct subdirectory of the classpath.
I am using Oracle Database 10g, and its JVM is 1.4. Is it possible to upgrade only the JVM of Oracle 10g from 1.4 to 1.5 or higher?
Regards,
Rahim P.K
Edited by: user7733307 on 20-Dec-2011 03:06
user7733307 wrote:
Is it possible to upgrade only the JVM of Oracle 10g from 1.4 to 1.5 or higher?
You are on the right track thinking it is a JDK version problem. But this is really not a Java programming question; it is an Oracle administration one.
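A quick way to confirm such a mismatch is to read the class-file header yourself: bytes 6 and 7 of a .class file hold the major version, and major 48 corresponds to JDK 1.4 while 49 corresponds to Java 5. A minimal sketch (the class name and the synthetic header bytes are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ClassVersion {
    // A .class file starts with the 0xCAFEBABE magic (bytes 0-3),
    // then the minor version (4-5) and the major version (6-7).
    // Major 48 = JDK 1.4, major 49 = Java 5, which is why a 1.4 JVM
    // rejects the classes with "wrong version 49.0, should be 48.0".
    static int majorVersion(byte[] classBytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(classBytes));
        if (in.readInt() != 0xCAFEBABE) {
            throw new IOException("not a class file");
        }
        in.readUnsignedShort();        // skip the minor version
        return in.readUnsignedShort(); // major version
    }

    public static void main(String[] args) throws IOException {
        // Synthetic header of a Java 5 class (version 49.0).
        byte[] header = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 49};
        System.out.println("major version: " + majorVersion(header));
    }
}
```

Run against the classes inside the iText 5.1.0 jar this would show 49, which the 1.4 JVM embedded in the 10g database refuses; the likely options are an iText build that targets 1.4 or a database release with a newer embedded JVM.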
Similar Messages
-
Issue while loading a .dbf file with SQL*Loader
Hi guys,
I am having an issue while loading a .dbf file with SQL*Loader.
I need to load the .dbf data into an Oracle table. For this I converted the .dbf file by simply renaming its extension to .csv. The file structure after changing .dbf to .csv:
C_N_NUMBER,COMP_CODE,CPT_CODE,C_N_AMT,CM_NUMBER
1810/4,LKM,30,45,683196
1810/5,LKM,30,45,683197
1810/6,LKM,30,45,683198
1810/7,LKM,30,135,683200
1810/8,LKM,30,90,683201
1810/9,LKM,1,45,683246
1810/9,LKM,2,90,683246
1810/10,LKF,1,90,683286
2810/13,LKJ,1,50.5,680313
2810/14,LKJ,1,50,680316
1910/1,LKQ,1,90,680344
3910/2,LKF,1,238.12,680368
3910/3,LKF,1,45,680382
3910/4,LKF,1,45,680395
7910/5,LKS,1,45,680397
7910/6,LKS,1,90,680400
7910/7,LKS,1,45,680401
7910/8,LKS,1,238.12,680414
7910/9,LKS,1,193.12,680415
7910/10,LKS,1,45,680490
Then I load it with SQL*Loader, but I always get the errors below:
Record 1: Rejected - Error on table C_N_DETL_TAB, column CPT_CODE.
ORA-01438: value larger than specified precision allowed for this column
Record 2: Rejected - Error on table C_N_DETL_TAB, column CPT_CODE.
ORA-01438: value larger than specified precision allowed for this column
Table structure:
create table C_N_DETL_tab (
"C_N_NUMBER" VARCHAR2(13),
"COMP_CODE" VARCHAR2(3),
"CPT_CODE" NUMBER(4),
"C_N_AMT" NUMBER(20,18),
"CM_NUMBER" NUMBER(7)
);
Control file:
options(skip=1)
load data
infile '/softdump/pc/C_N_DETL.csv'
badfile '/softdump/pc/C_N_DETL.bad'
discardfile '/softdump/pc/C_N_DETL.dsc'
truncate
into table C_N_DETL_tab
FIELDS TERMINATED BY ","
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
C_N_NUMBER CHAR,
COMP_CODE CHAR,
CPT_CODE INTEGER,
C_N_AMT INTEGER,
CM_NUMBER INTEGER
)
But when I increase the size of all the table columns up to their maximum values, the data loads; yet when I check the maximum column length after the load, it is much smaller than those sizes.
Changed table structure:
create table C_N_DETL_tab (
"C_N_NUMBER" VARCHAR2(130),
"COMP_CODE" VARCHAR2(30),
"CPT_CODE" NUMBER(32),   -- max value of number
"C_N_AMT" NUMBER(32,18), -- max value of number
"CM_NUMBER" NUMBER(32)   -- max value of number
);
Now I am running:
sqlldr express/express control=C_N_DETL.ctl log=C_N_DETL.log
Output:
Table C_N_DETL_TAB, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect
Column Name                   Position   Len  Term Encl Datatype
----------------------------- ---------- ---- ---- ---- ---------
C_N_NUMBER                    FIRST      *    ,    O(") CHARACTER
COMP_CODE NEXT * , O(") CHARACTER
CPT_CODE NEXT 4 INTEGER
C_N_AMT NEXT 4 INTEGER
CM_NUMBER NEXT 4 INTEGER
Table C_N_DETL_TAB:
20 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
select max(length(CPT_CODE)) from C_N_DETL_tab ---> 9
Can you tell me why I need to increase the column sizes up to their maximum values, although the data itself is so much shorter?
Kindly check it. Thanks in advance.
rgds,
pc
No database version, of course. Unimportant.
If I recall correctly, it should be 'integer external', and you would do best to double-quote the alphanumerics (which you didn't).
Try changing INTEGER to INTEGER EXTERNAL in the ctl file.
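The reason INTEGER misreads character data: in SQL*Loader, INTEGER denotes a native 4-byte binary field, so the ASCII digits (and whatever follows them) are reinterpreted as one raw 32-bit value, which easily overflows NUMBER(4). A rough illustration (the sample bytes are taken from the first CSV row above):

```java
import java.nio.ByteBuffer;

public class BinaryIntegerDemo {
    public static void main(String[] args) {
        // SQL*Loader's INTEGER datatype consumes 4 raw bytes, not digits.
        // Applied to character data, the ASCII bytes of the field and its
        // neighbours are reinterpreted as a single 32-bit value:
        byte[] ascii = "30,4".getBytes();   // 4 bytes taken from the text "30,45,..."
        int misread = ByteBuffer.wrap(ascii).getInt();
        System.out.println(misread);        // far larger than NUMBER(4) allows
    }
}
```

INTEGER EXTERNAL (or CHAR) makes SQL*Loader parse the field as character digits instead, which is what a delimited CSV needs.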
Sybrand Bakker
Senior Oracle DBA -
Special character issue while loading data from SAP HR through VDS
Hello,
We have a special character issue, while loading data from SAP HR to IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
French accents like é, à, è and ù are loaded correctly, but Turkish special characters (such as Ş, İ, ł) are transformed into "#" in IdM.
The question is: does someone know of any special setting in the VDS or in IdM for special characters that would solve this issue?
Our SAP HR version is ECC6.0 (ABA/BASIS7.0 SP21, SAP_HR6.0 SP54) and we are using a VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch1 on oracle 10.2.
Thanks
We are importing directly to the HR staging area, using the transactions/programs HRLDAP_MAP, LDAP and /RPLDAP_EXTRACT; then we have a job which extracts data from the staging area to a CSV file.
So before the import, the character appears correctly in SAP HR, but by the time it comes through the VDS to the IDM's temporary table, it becomes "#".
Yes, our data is coming from a Unicode system.
So, could it be a Java parameter to change or add in the VDS?
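One plausible culprit (an assumption, not a confirmed diagnosis) is a single-byte charset such as ISO-8859-1 somewhere in the VDS/JVM pipeline: Latin-1 covers the French accents but not the Turkish letters, so only the latter degrade on re-encoding. A small sketch of the effect:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // ISO-8859-1 (Latin-1) contains é but not the Turkish Ş or İ.
        // Forcing a Unicode string through it replaces the unmappable
        // characters, while the French accents survive intact.
        String original = "\u00e9 \u015e \u0130"; // "é Ş İ"
        String roundTrip = new String(original.getBytes(StandardCharsets.ISO_8859_1),
                                      StandardCharsets.ISO_8859_1);
        System.out.println(roundTrip);
    }
}
```

The exact replacement character depends on the converter in the chain ("#" in your case, "?" for Java's default encoder), but the mechanism is the same; checking the JVM's file.encoding on the VDS side would be a first step.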
Regards. -
Issue while loading the Berkeley database
Hi,
I have an issue while loading data into the Berkeley database. When I load the XML files into the Berkeley database, some files are created, named something like db.001, db.002, log.00000001, etc. I have a server running which tries to access these files. When I try to reload the Berkeley DB while my server is running, I am unable to load it; I have to stop the server, load the Berkeley DB, and restart the server again. Is there any way I can reload the database without having to restart the server? Your response would help me find a solution to this issue. I am currently using Berkeley Database version 2.2.13.
Thanks,
Priyadarshini
Message was edited by: Priyadarshini
user569257
Hi Priyadarshini,
The db.001 and db.002 are the environment's region files and the log.00000001 is one of the environment's transactional logs. The region files are created when you use an environment, and their size and number depend on the subsystems that you configure on your environment (memory pool, logging, transactions, locking). The log files reflect the modifications that you perform on your environment's database(s), and they are used along with the transaction subsystem to provide recoverability, ACID capabilities and protect against application or system failures.
Is there a reason why that server tries to access these files? The server process that runs while you load your database should not interfere with those files, as they are used by Berkeley DB.
Regards,
Andrei -
Issue while loading library files (".so" or ".sl") using JNI
Hi,
We load the C library files using System.load during the init phase of a servlet.
While loading the application for the first time everything goes smoothly and the application behaves as expected.
We face the issue below when we try restarting the application through the admin console in WebSphere (WAS) for any patch deployment in the application.
java.lang.UnsatisfiedLinkError: Native Library /users/test1/siva/jnilib.so already loaded in another classloader.
If we restart the complete WAS, everything works fine.
There is no System.unload function in Java to remove a library loaded into the JVM.
Is there any alternate way to unload a library that is loaded in a class loader, callable in the destroy phase of the servlet, to resolve this issue?
Any help here is highly appreciated.
TIA,
Siva.
sivabalan wrote:
Hi,
We are loading the c library files using system.load during the init phase of servlet.
You mean a shared library, not C files (which would be source).
However I am not sure that loading it in a servlet is a good idea. But that is a different issue.
Is there any alternate way to unload the library which is loaded in the class loader, which can be called in the destroy phase of the servlet?
This is how it works on the Sun VM back to about 1.2, and as far as I know there is no other way for it to work on any other VM.
You have a class with native methods that relies on the shared library. The class and the shared library are loaded into a class loader. If the class loader is unloaded by the GC then the native library will be unloaded as well.
A class loader can be collected by the GC if the classes loaded by it are no longer actively referenced, so all class instances must be collectable.
If the above is true, then running System.gc() a couple of times will collect the class loader and thus the native library.
So in your situation it might work if your app server allows you to trigger a GC (a full GC is better, if that is an option). You could try unloading the app, invoking the GC several times, then doing a load. Try about six GC passes, then reduce the count to see if two are enough.
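The unload idiom above can be sketched with a weak reference standing in for the class loader (the names here are illustrative, not WebSphere API): drop every strong reference, then poll the reference across repeated GC passes.

```java
import java.lang.ref.WeakReference;

public class UnloadSketch {
    // Polls a weak reference across repeated GC passes. Once the referent
    // (in the real case: the class loader holding the native library) is
    // collected, the JVM is free to unload the shared library with it.
    static boolean waitForCollection(WeakReference<?> ref, int attempts) throws InterruptedException {
        for (int i = 0; i < attempts && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(100);
        }
        return ref.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        Object loaderStandIn = new Object();  // stands in for the servlet's class loader
        WeakReference<Object> ref = new WeakReference<>(loaderStandIn);
        loaderStandIn = null;                 // last strong reference dropped
        System.out.println(waitForCollection(ref, 6) ? "collected" : "still reachable");
    }
}
```

Collection is never guaranteed, so treat a "still reachable" result as "try again later", not as an error.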
If that doesn't work, there could be some programmatic solutions using the same idiom. -
Issue while loading Master Data through Process Chain in Production
Hi All,
We are getting an error in Process chain while loading Master Data
Non-updated Idocs found in Source System
Diagnosis
IDocs were found in the ALE inbox for Source System that are not updated.
Processing is overdue.
Error correction:
Attempt to process the IDocs manually. You can process the IDocs manually using the Wizard or by selecting the IDocs with incorrect status and processing them manually.
I also checked the PSA but could not find any records, and the strange thing is that the job itself is not getting scheduled. Can anyone help me resolve this issue?
Regards
Bhanumathi
Hi,
This problem is not related to the process chain.
You can try this:
In RSMO, select the particular load you want to monitor.
In the menu bar, Environment >>> Transact. RFC >>> Select whichever is required, BW or Source System.
In the next screen you can select the Execute button and the IDOCS will be displayed.
Check Note 561880 - Requests hang because IDocs are not processed.
OR
Transact RFC - status running Yellow for long time (Transact RFC will be enabled in Status tab in RSMO).
Step 1: Go to Details > Status and get the IDoc number, then go to BD87 in R/3. Place the cursor on the red IDoc entries in the tRFC queue that are under outbound processing, and click "Display IDoc" on the menu bar.
Step 2: In the next screen click on Display tRFC calls (will take you to SM58 particular TRFC call)
place the cursor on the particular Transaction ID and go to EDIT in the menu bar --> press 'Execute LUW'
(Display tRFC calls (will take you to SM58 particular TRFC call) ---> select the TrasnID ---> EDIT ---> Execute LUW)
Rather than going to SM58 and executing the LUW directly, it is safer to go through BD87, giving the IDoc name, as it will take you to the particular tRFC request for that IDoc.
OR
Go into the job overview of the load; there you should be able to find the Data Package ID.
(For this in RSMO Screen> Environment> there is a option for Job overview.)
This Data Package TID is Transaction ID in SM58.
OR
SM58 > Enter * / the user name or the background (ALEREMOTE) user name and execute. It will show you all the pending tRFCs with their transaction IDs.
In the Status Text column you can see two statuses:
Transaction Recorded and Transaction Executing.
Don't disturb anything if the status is the second one, Transaction Executing. If the status is the first one (Transaction Recorded), manually execute the "Execute LUWs".
OR
Directly go to SM58 > enter * / the user name or the background (ALEREMOTE) user name and execute. It will show the tRFCs to be executed for that user. Find the particular tRFC (SM37 > request name > TID from the data packet with sysfail), select the TransID (SM58) ---> EDIT ---> Execute LUW.
(from JituK)
Hope it helps
Darshan -
Issue while loading data from DSO to InfoCube
Hi Experts,
Can you tell me what the root cause might be? The data coming into the DSO from R/3 is correct and as required, but when loading it from the DSO to the InfoCube it shows wrong data: some line items that were closed appear open in the cube, and the key figure values are not right either.
Also, there is no routine code involved between the DSO and the InfoCube.
Thanks in adv .
NP
Hope you didn't delete some request from the DSO without deleting the change log; this might cause inconsistency.
If so, delete the data from the DSO (right-click > Delete Data) and reload.
Issue while loading data from DSO to Cube
Gurus,
I'm facing a typical problem while loading Cube from a DSO. The load is based upon certain conditions.
Though the data in the DSO satisfies those conditions, the cube is still not being loaded.
All the loads are managed through Process Chains.
Would there be any reason in specific/particular for this type of problem?
Any pointers would be of greatest help !
Regards,
Yaseen & Soujanya
Yaseen & Soujanya,
It is very hard to guess the problem with the amount of information you have provided.
- What do you mean by "the cube is not being loaded"? Are no records extracted from the DSO? Is the load completing with 0 records?
- How is data loaded from DSO to InfoCube? Full? Are you first loading to PSA or not?
- Is there data already in the InfoCube?
- Is there change log data for DSO or did someone delete all the PSA data?
Since there are so many reasons for the behavior you are witnessing, your best option is to approach your resident expert.
Good luck.
Sudhi Karkada
<a href="http://main.nationalmssociety.org/site/TR/Bike/TXHBikeEvents?px=5888378&pg=personal&fr_id=10222">Biking for MS Relief</a> -
Issue while loading data to sample essbase app using odi
While loading data into an Essbase app, I am getting the following error:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 23, in ?
com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
at org.python.core.PyMethod.__call__(PyMethod.java)
at org.python.core.PyObject.__call__(PyObject.java)
at org.python.core.PyInstance.invoke(PyInstance.java)
at org.python.pycode._pyx4.f$0(<string>:23)
at org.python.pycode._pyx4.call_function(<string>)
at org.python.core.PyTableCode.call(PyTableCode.java)
at org.python.core.PyCode.call(PyCode.java)
at org.python.core.Py.runCode(Py.java)
at org.python.core.Py.exec(Py.java)
at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.g.y(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
Caused by: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
at com.hyperion.odi.essbase.ODIEssbaseDataWriter.sendRecordArrayToEsbase(Unknown Source)
... 32 more
com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.g.y(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
Hi,
It means that in the options of your KM you have it set to quit when it hits one error. If you set it to 0 (infinite), it will not stop no matter how many data-load errors it hits.
If you set it to 0 and run the interface, then depending on how you set up the options in the KM it can write to two log files, which you should check.
Cheers
John
http://john-goodwin.blogspot.com/ -
Hi,
When we launch the Exchange PowerShell via the command prompt with the below command:
PowerShell.exe -PSConsoleFile "D:\Program Files\Microsoft\Exchange Server\V14\Bin\ExShell.psc1"
it throws the following error:-
The following errors occurred when loading the console:
D:\Program Files\Microsoft\Exchange Server\V14\Bin\ExShell.psc1:
can not load windows powershell snap-in microsoft.exchange.management.powershell.e2010 because of the following error:
no snap-ins have been registered for windows powershell version 2
We are using Exchange Server 2010 SP3 Rollup 5 and Exchange Server 2010 SP1. We see the issue on both servers.
Can you please provide the steps to resolve the issue?
Thanks & Regards,
Sanjeev Gupta
I have got the resolution of the problem. In my environment, the PowerShell.exe launched from the command prompt was a 32-bit exe, while a 64-bit exe was needed.
It occurred because the 32-bit powershell.exe was present in the system path. After replacing it with the 64-bit exe, the problem was solved.
Thanks for the help provided above..
Regards,
Sanjeev -
Issues while loading the Menu in Infoview for Performance Manager
Hi,
I have been seeing intermittent issues with the menu loading when I click the Performance Manager tab in Java InfoView. I am working on BOXI R2 SP2.8 on Windows 2003 Server with Oracle 10g as our database. I have been working with BO Tech Support to find a resolution, but so far they have been unsuccessful in finding the root cause of the issue.
I have tried enabling the Performance Manager logs, checked the Tomcat (application server) logs, and checked the AF_Verbose logs, which actually pointed to errors with web.xml; however, there is no other error pertaining to it. The issue comes up intermittently; sometimes the menu loads successfully, but most of the time it cannot load the menu for the corporate dashboard.
Any input on this would be very helpful.
First change your flat file order like this:
EMPID CID QTY UNIT PRICE REVENUE
Create the DataSource in the same manner.
Issue while loading a Crystal Report
Hi ,
The Crystal Report loads properly in the VS viewer for Crystal Reports when we preview it, but when the same report is loaded through the application, the following errors are displayed:
1) "Object reference not set to an instance of an object." (in debug mode)
2) "Load Report failed." (in release mode)
The issue cannot be reproduced when we view the report directly in VS 2012. Also, no exception is raised in the code.
Crystal Reports version: VS13_0_5
You are using Service Pack 5; the current SP is 9. I'd recommend updating your install to that. Not that the error has anything to do with the SP, but you always want to be as up to date as possible.
If you Google "Object reference not set to an instance of an object.", you will get a lot of help. Usually the error is a programming issue.
Your best bet is to look at a few samples here:
Crystal Reports for .NET SDK Samples - Business Intelligence (BusinessObjects) - SCN Wiki
Developer Help files are here:
SAP Crystal Reports .NET SDK Developer Guide
SAP Crystal Reports .NET API Guide
I'd also suggest reviewing the document Crystal Reports for Visual Studio 2005 Walkthroughs. No worries about the reference to VS 2005. Pretty well everything in that document applies to all subsequent versions of .NET and CR.
- Ludek
Senior Support Engineer AGS Product Support, Global Support Center Canada
Follow us on Twitter -
Issue while loading master data from BI7 to BPC
Dear Experts,
I'm trying to load master data from BI7 to BPC NW using scenario 2 mentioned in the document below.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00380440-010b-2c10-70a1-e0b431255827
My requirement is to load 0GL_ACCOUNT attribute and text data from the BI7 system to BPC.
1. As mentioned in the how-to document, I created a dimension called GL_ACCOUNT using the BPC Admin client.
2. I am able to see GL_ACCOUNT in RSA1, but when I try to create a transformation (step 17, page 40) to load the attribute data, I cannot find 0GL_ACCOUNT (which exists in BI7) as the source object of the transformation. Pressing F4 in the Name field shows only the dimensions available in the BPC system.
What could be the reason for not getting the BI InfoObject as a source in BPC?
Thanks in advance...
regards,
Raju
Dear Gurus,
My issue got resolved. I was trying to pull data from R/3 > BW > BPC. In the existing landscape, BW and BPC are two different boxes; that is the reason I could not see BW objects in BPC. To resolve the issue, I created a new InfoObject (via RSD1) in BPC, and the data load now goes R/3 > BPC InfoObject (created through RSD1) > BPC dimension.
Thanks and regards,
Raju -
Performance issue while loading 20 million rows
Hi all,
20 million rows were loaded into a table (which contains 173 columns) using SQL*Loader, with direct=true.
database is : 9i
OS : Sun OS 5.10 Sun V890, 16 cpu 32GB RAM
Elapsedtime is : 4 hours
But the same volume was tried, into the same table (with the columns increased from 173 to 500), with the following details:
Database : oracle 10.2.0.4.0 64 bit
OS : Sun Os 5.10 SUN-FIRE V6800, 24 cpu and 54GB RAM
Elapsed time : 6:06 hours
Please tell me what the problem could be and how I can minimize the loading time.
Thanks in Advance
Anji
Hi Burleson,
The AWR snapshot is as follows.
DB Name DB Id Instance Inst Num Release RAC Host
REVACC 1015743016 REVACC 1 10.2.0.4.0 NO P4061AFMAP
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 342 16-Sep-09 19:30:53 38 2.7
End Snap: 343 16-Sep-09 20:30:07 36 2.6
Elapsed: 59.24 (mins)
DB Time: 195.22 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 9,184M 9,184M Std Block Size: 16K
Shared Pool Size: 992M 992M Log Buffer: 10,560K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,097,030.72 354,485,330.18
Logical reads: 23,870.31 7,713,251.27
Block changes: 8,894.16 2,873,984.09
Physical reads: 740.82 239,382.82
Physical writes: 1,003.32 324,203.27
User calls: 28.54 9,223.18
Parses: 242.99 78,517.09
Hard parses: 0.03 8.55
Sorts: 0.60 193.91
Logons: 0.01 3.45
Executes: 215.63 69,676.00
Transactions: 0.00
% Blocks changed per Read: 37.26 Recursive Call %: 99.65
Rollback per transaction %: 0.00 Rows per Sort: 7669.57
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 96.93 In-memory Sort %: 100.00
Library Hit %: 99.97 Soft Parse %: 99.99
Execute to Parse %: -12.69 Latch Hit %: 99.52
Parse CPU to Parse Elapsd %: 91.82 % Non-Parse CPU: 99.64
Shared Pool Statistics Begin End
Memory Usage %: 44.50 44.46
% SQL with executions>1: 85.83 84.78
% Memory for SQL w/exec>1: 87.15 86.65
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 11,632 99.3
db file scattered read 320,585 210 1 1.8 User I/O
SQL*Net more data from client 99,234 164 2 1.4 Network
log file parallel write 5,750 149 26 1.3 System I/O
db file parallel write 144,502 142 1 1.2 System I/O
Time Model Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
-> Total time in database user-calls (DB Time): 11713.1s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 11,631.9 99.3
sql execute elapsed time 3,131.4 26.7
parse time elapsed 53.7 .5
hard parse elapsed time 1.2 .0
connection management call elapsed time 0.3 .0
hard parse (sharing criteria) elapsed time 0.1 .0
sequence load elapsed time 0.1 .0
repeated bind elapsed time 0.0 .0
PL/SQL execution elapsed time 0.0 .0
DB time 11,713.1 N/A
background elapsed time 613.1 N/A
background cpu time 454.5 N/A
Wait Class DB/Inst: REVACC/REVACC Snaps: 342-343
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
User I/O 562,302 .0 304 1 51,118.4
System I/O 166,468 .0 295 2 15,133.5
Network 201,009 .0 165 1 18,273.5
Application 60 .0 5 82 5.5
Configuration 313 .0 4 12 28.5
Other 1,266 .0 3 2 115.1
Concurrency 9,305 .0 2 0 845.9
Commit 60 .0 1 21 5.5
Wait Events DB/Inst: REVACC/REVACC Snaps: 342-343
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file scattered read 320,585 .0 210 1 29,144.1
SQL*Net more data from clien 99,234 .0 164 2 9,021.3
log file parallel write 5,750 .0 149 26 522.7
db file parallel write 144,502 .0 142 1 13,136.5
db file sequential read 207,780 .0 93 0 18,889.1
enq: RO - fast object reuse 60 .0 5 82 5.5
write complete waits 135 .0 3 23 12.3
control file parallel write 2,501 .0 3 1 227.4
rdbms ipc reply 189 .0 2 12 17.2
control file sequential read 13,694 .0 2 0 1,244.9
buffer busy waits 8,499 .0 1 0 772.6
log file sync 60 .0 1 21 5.5
direct path write 8,290 .0 1 0 753.6
SQL*Net message to client 100,882 .0 1 0 9,171.1
log file switch completion 13 .0 0 38 1.2
os thread startup 2 .0 0 174 0.2
direct path read 25,646 .0 0 0 2,331.5
log buffer space 161 .0 0 1 14.6
latch free 7 .0 0 24 0.6
latch: object queue header o 180 .0 0 1 16.4
log file single write 11 .0 0 9 1.0
SQL*Net more data to client 893 .0 0 0 81.2
latch: cache buffers chains 767 .0 0 0 69.7
row cache lock 36 .0 0 2 3.3
LGWR wait for redo copy 793 .0 0 0 72.1
reliable message 60 .0 0 1 5.5
latch: cache buffers lru cha 11 .0 0 1 1.0
db file single write 1 .0 0 10 0.1
log file sequential read 10 .0 0 1 0.9
latch: session allocation 18 .0 0 1 1.6
latch: redo writing 4 .0 0 0 0.4
latch: messages 7 .0 0 0 0.6
latch: row cache objects 1 .0 0 0 0.1
latch: checkpoint queue latc 1 .0 0 0 0.1
PX Idle Wait 13,996 100.5 27,482 1964 1,272.4
SQL*Net message from client 100,881 .0 23,912 237 9,171.0
Streams AQ: qmn slave idle w 126 .0 3,442 27316 11.5
Streams AQ: qmn coordinator 255 50.6 3,442 13497 23.2
Streams AQ: waiting for time 1 100.0 545 544885 0.1
class slave wait 2 .0 0 2 0.2
Background Wait Events DB/Inst: REVACC/REVACC Snaps: 342-343
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 5,750 .0 149 26 522.7
db file parallel write 144,502 .0 142 1 13,136.5
control file parallel write 2,501 .0 3 1 227.4
direct path write 8,048 .0 1 0 731.6
control file sequential read 3,983 .0 0 0 362.1
os thread startup 2 .0 0 174 0.2
direct path read 25,646 .0 0 0 2,331.5
log buffer space 161 .0 0 1 14.6
events in waitclass Other 924 .0 0 0 84.0
log file single write 11 .0 0 9 1.0
db file single write 1 .0 0 10 0.1
log file sequential read 10 .0 0 1 0.9
db file sequential read 2 .0 0 5 0.2
latch: cache buffers chains 42 .0 0 0 3.8
latch: redo writing 4 .0 0 0 0.4
buffer busy waits 2 .0 0 0 0.2
rdbms ipc message 77,540 24.8 54,985 709 7,049.1
pmon timer 1,185 100.0 3,456 2916 107.7
Streams AQ: qmn slave idle w 126 .0 3,442 27316 11.5
Streams AQ: qmn coordinator 255 50.6 3,442 13497 23.2
smon timer 112 .0 3,374 30125 10.2
Streams AQ: waiting for time 1 100.0 545 544885 0.1
Operating System Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
Statistic Total
AVG_BUSY_TIME 161,850
AVG_IDLE_TIME 187,011
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 9,653
AVG_USER_TIME 152,083
BUSY_TIME 3,887,080
IDLE_TIME 4,491,132
IOWAIT_TIME 0
SYS_TIME 234,325
USER_TIME 3,652,755
LOAD 11
OS_CPU_WAIT_TIME 9,700
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 57,204,736
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 56,895,045,632
NUM_CPUS 24
Service Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
SYS$USERS 11,931.9 11,848.9 2,608,446 ##########
REVACC 0.0 0.0 0 0
SYS$BACKGROUND 0.0 0.0 25,685 34,096
Service Wait Class Stats DB/Inst: REVACC/REVACC Snaps: 342-343
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
SYS$USERS
525903 24088 9259 161 0 0 201012 16511
REVACC
0 0 0 0 0 0 0 0
SYS$BACKGROUND
36410 6292 46 35 0 0 0 0
I will provide the entire report if this is not sufficient.
Thanks
Anji
Edited by: user11907415 on Sep 17, 2009 6:39 AM -
IDoc issue while loading data into BI
Hello Gurus,
Initially I had a source system connection problem. After it was fixed by Basis, I followed the process below.
I am loading the data using a generic extractor with a selection specified, and it is a full load. When I load the data from the R/3 PP application, I find the result below in the BW monitor screen.
1. The job completed in the R/3 system and 1 million records were fetched by the extractor.
2. The records were not posted to the BW side because of a tRFC issue.
3. I got the IDoc number and processed it in transaction BD87, but it did not process successfully; it gives the following error: "Error when saving BOM. Please see the application log."
4. When I check the application log using transaction SLG1 with the time and date of that particular process, it is in yellow status.
Kindly let me know how I can resolve this issue. I have already tried repeating the InfoPackage job and still face the same issue. I have also checked the connection; it is OK.
Regards
Hello Veerendra,
Thanks for your quick response. Yes, I am able to process it manually; after processing, it ended with status 51, application document not posted.
Could you please help me out with the same?
Regards
Edited by: KK on Nov 4, 2009 2:19 AM
Edited by: KK on Nov 4, 2009 2:28 AM