Execution Time of Mapping
Hi All,
We are executing a mapping multiple times through the Control Center on a local server.
The first execution of the mapping takes less time than the second/third execution of the same mapping. Why would that be?
Thanks in Advance...
Hi,
Mapping execution time depends on a lot of DB objects. In a Dev environment there is no control over the load, so the time may increase. Some of the factors: table availability (a table may be in use by another mapping, so your query may be put on a wait), the number of processes running (your process may have to wait for some other process to complete), multiple people using the tables, tables not being analyzed, etc. So Dev running times may not be entirely accurate. If there is a difference in time in a more controlled environment, then we can analyze the reason.
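The run-to-run variance described above is easy to quantify once the workload can be invoked programmatically; a minimal sketch (the lambda below is a stand-in workload, since a real mapping run would be triggered from the Control Center):

```python
import time
import statistics

def time_runs(job, n=5):
    """Time n executions of job() and return each run's elapsed seconds."""
    times = []
    for _ in range(n):
        start = time.perf_counter()
        job()
        times.append(time.perf_counter() - start)
    return times

# A stand-in workload; in practice each "run" would be one mapping execution.
elapsed = time_runs(lambda: sum(i * i for i in range(100_000)))
spread = max(elapsed) - min(elapsed)
mean = statistics.mean(elapsed)
```

Comparing `spread` against `mean` gives a rough feel for how noisy the environment is before blaming the mapping itself.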
Regards
Bharadwaj Hari
Similar Messages
-
Logging start & end time of map execution
Hello,
I want to log the start and end time of each execution of my map (OWB 11g), so I've created a table for this purpose and use it twice in every map that I want to time: once for logging the start time and once for the end time.
I pass a constant with the SYSTIMESTAMP value into my log table, along with the name of my map. The problem is that the times of both records (start and end) are very near each other (the difference is in milliseconds!) even though my map runs for more than 2 minutes! So I've changed my map's Target Load Order to: [log table for start time] + [main tables of my map] + [log table for end time], and set the map's Use Target Load Ordering option to True as well.
Why doesn't it work? Is there a better solution for logging every map's execution time in a table?
Please help me ...
Thanks.
To do that, I have created a view that lists all processes that are running or finished. The view contains these fields:
process_name
process_type (plsqlmap, plsqlprocedure, processflow, etc)
run_status (success, error, etc)
start_time
end_time
elapse_time
inserted
updated
deleted
merged
You could insert into your log table by selecting from this view after every map or, as I do it, insert into the log table after every process flow. That is, once my process flow is complete, I select the details for all of its maps and insert those details into my log table.
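Outside OWB, the same start-at-the-beginning / end-at-the-end logging pattern can be sketched generically; sqlite3 stands in for the Oracle log table here, and the table and map names are illustrative:

```python
import sqlite3
import time
from datetime import datetime

# Hypothetical log table; in OWB the timestamps would come from SYSTIMESTAMP,
# written before the load starts and after it finishes.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE map_run_log ("
    " map_name TEXT, start_time TEXT, end_time TEXT, elapsed_s REAL)"
)

def run_logged(map_name, body):
    """Run body() and record its start time, end time and elapsed seconds."""
    start = datetime.now()
    t0 = time.perf_counter()
    body()  # the actual data load would happen here
    elapsed = time.perf_counter() - t0
    conn.execute(
        "INSERT INTO map_run_log VALUES (?, ?, ?, ?)",
        (map_name, start.isoformat(), datetime.now().isoformat(), elapsed),
    )

run_logged("M_DEMO_MAP", lambda: time.sleep(0.01))
name, elapsed_s = conn.execute(
    "SELECT map_name, elapsed_s FROM map_run_log"
).fetchone()
```

The key point, matching the original complaint, is that the two timestamps must be captured at run time around the load, not both evaluated when the statement is built.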
Here is the SQL for my view. This is for 10.2.0.3.
CREATE OR REPLACE FORCE VIEW BATCH_STATUS_LOG_REP_V
AS
  (SELECT PROCESS_NAME,
          PROCESS_TYPE_SYMBOL,
          (CASE
              WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_OK', 'COMPLETE') THEN 'SUCCESS'
              WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_FAILURE') THEN 'ERROR'
              WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_OK_WITH_WARNINGS') THEN 'WARNINGS'
              ELSE 'NA'
           END) RUN_STATUS_SYMBOL,
          START_TIME,
          END_TIME,
          ELAPSE_TIME,
          NUMBER_RECORDS_INSERTED,
          NUMBER_RECORDS_UPDATED,
          NUMBER_RECORDS_DELETED,
          NUMBER_RECORDS_MERGED
     FROM OWB_RUN.RAB_RT_EXEC_PROC_RUN_COUNTS
    WHERE TRUNC (START_TIME) >= TRUNC (SYSDATE) - 3)
ORDER BY START_TIME DESC;
-
Hi All,
We have created a mapping. It contains only one expression, in which we do a TO_CHAR conversion and trim the data.
The target table structure contains 99 columns; one column is unique.
The record count at source level is 3 lakh+ (300,000+) records.
For that reason we created two mappings: one for the initial load and another to load the last 30 days of data.
We created a process flow to load the last 30 days of data, based on the filter (Updatedate >= (sysdate - 30)), and it is taking 3+ hours of execution time (12000+ records).
We tried reducing it to the last 7 days of data to load into the target; it still takes 3+ hours to load (3000+ records).
How can we reduce the execution time?
Regards,
Try indexing the Updatedate column if the % of data that you're retrieving is small compared to the total in the table.
Configure the mapping to run SET BASED.
Configure the mapping so that the DEFAULT AUDIT LEVEL is ERROR DETAILS or NONE.
Drop/disable indexes on target and rebuild afterwards.
Drop/disable/novalidate FKs on target and reapply afterwards.
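The drop/disable-then-rebuild advice boils down to a pair of statements per index. A sketch that just generates them (ALTER INDEX ... UNUSABLE / REBUILD is standard Oracle syntax; the index names below are made up for illustration):

```python
def index_maintenance_sql(index_names):
    """Build the statements to disable the given indexes before a bulk load
    and rebuild them afterwards (standard Oracle ALTER INDEX forms)."""
    disable = [f"ALTER INDEX {name} UNUSABLE" for name in index_names]
    rebuild = [f"ALTER INDEX {name} REBUILD" for name in index_names]
    return disable, rebuild

disable, rebuild = index_maintenance_sql(["TGT_PK_IX", "TGT_UPDDATE_IX"])
```

The disable statements run before the load and the rebuilds after; note that unique indexes that enforce constraints need extra care, since the load can't violate them while they're unusable.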
Cheers
Si -
How to check mappings execution time in Process flow
Hi All,
We created one process flow and scheduled it. It completed successfully after 30 minutes.
The process flow contains 3 mappings: the first mapping completes successfully, then the second mapping starts; after the second mapping completes successfully, the third mapping starts and completes successfully. Success emails are generated.
I would like to know which mapping is taking a long time to execute.
Could you please suggest how we can find which mapping takes the longest execution time?
I don't want to run each mapping individually and check the execution time.
Regards,
Ava.
Execute the below query in the OWB owner or user schema.
In place of '11111' give the execution id from control center.
select map_run.NUMBER_RECORDS_INSERTED,
       map_run.NUMBER_RECORDS_MERGED,
       map_run.NUMBER_RECORDS_UPDATED,
       exe.EXECUTION_AUDIT_ID,
       exe.ELAPSE_TIME,
       exe.EXECUTION_NAME,
       exe.EXECUTION_AUDIT_STATUS,
       map_run.MAP_NAME
  from ALL_RT_AUDIT_MAP_RUNS map_run, ALL_RT_AUDIT_EXECUTIONS exe
 where exe.EXECUTION_AUDIT_ID = map_run.EXECUTION_AUDIT_ID(+)
   and exe.execution_audit_id > '11111'
 order by exe.execution_audit_id desc;
Cheers
Nawneet
Edited by: Nawneet on Feb 22, 2010 4:26 AM -
ETL execution time want to reduce
Hi Everybody,
I am working on OWB 10g R2.
Environment: Windows 2003 Server, 64-bit Itanium;
Oracle 10g database on a NetApp server, mapped as the I: drive on the 186 server where OWB is installed.
source files : oracle's staging schema
target : oracle target schema
Problem :
A month ago our ETL process took 2 hrs to complete;
nowadays it takes 5 hrs, and I don't know why.
Can anybody suggest what I need to check in OWB for optimization?
Thanks for the reply, sir.
As you suggested a query for checking the execution times in descending order, I am sending you a little of the output for today's execution.
MAP_NAME: "M_CONTRACT_SUMMARY_M2__V_1"
  START_TIME: 20-NOV-07   END_TIME: 20-NOV-07   ELAPSE_TIME: 1056
  NUMBER_ERRORS: 0   NUMBER_LOGICAL_ERRORS: 0
  SELECTED: 346150   INSERTED: 0   UPDATED: 346052   DELETED: 0   DISCARDED: 0   MERGED: 0

MAP_NAME: "M_POLICY_SUSPENCE_V_1"
  START_TIME: 20-NOV-07   END_TIME: 20-NOV-07   ELAPSE_TIME: 884
  NUMBER_ERRORS: 0   NUMBER_LOGICAL_ERRORS: 0
  SELECTED: 246576   INSERTED: 0   UPDATED: 0   DELETED: 0   DISCARDED: 0   MERGED: 246576

MAP_NAME: "M_ACTIVITY_AMT_DETAIL_M3_V_1"
  START_TIME: 20-NOV-07   END_TIME: 20-NOV-07   ELAPSE_TIME: 615
  NUMBER_ERRORS: 0   NUMBER_LOGICAL_ERRORS: 0
  SELECTED: 13927   INSERTED: 13927   UPDATED: 0   DELETED: 0   DISCARDED: 0   MERGED: 0
==================================
I think the elapse time depends on the number of records selected and inserted/merged (whatever the operation); if the record count is reduced, the time reduces too. But compared to before (when the ETL finished within 2 hrs), we now see more than 100 seconds' difference between then and now.
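Assuming the ELAPSE_TIME in the audit output is reported in seconds, the throughput implied by each map can be computed directly from the figures above:

```python
def rows_per_second(rows, elapsed_s):
    """Throughput implied by a row count and an elapsed time in seconds."""
    return rows / elapsed_s

# Figures from the audit output above (assumption: ELAPSE_TIME is seconds).
m_contract = rows_per_second(346150, 1056)  # M_CONTRACT_SUMMARY_M2__V_1
m_policy = rows_per_second(246576, 884)     # M_POLICY_SUSPENCE_V_1
m_activity = rows_per_second(13927, 615)    # M_ACTIVITY_AMT_DETAIL_M3_V_1
```

Comparing rows-per-second across runs (rather than raw elapsed times) separates "more data" from "the same data got slower".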
Source tables are analyzed daily before mapping execution starts, and target tables are analyzed in the evening.
As far as I remember, no major changes were made to the ETL mappings recently. One day there was a problem with the source location for another process, Wonders (as I said before, there are 3 main processes: Sun, Wonders and Life Asia, of which Sun and Wonders are scheduled), so we corrected that location and redeployed all the mappings as the Control Center messages required.
The mappings then ran fine, but the execution time increased by 1 more hour (5+ hrs) compared to before (3-4 hrs).
The normal times were:
2 hrs for Life Asia,
30 minutes for Wonders,
15 minutes for Sun.
Can you suggest a temporary/permanent solution for this problem?
Our system configuration:
1 TB HDD, of which 200-300 GB free
4 GB RAM
64-bit Windows OS
Temp tablespace 99% used, with auto-extend
Target tablespace 93-95% used
Data is loaded incrementally, daily.
The load window was 5 am to 8 am, which nowadays runs until 12:30 pm,
after which the materialized views refresh,
after which the reports and cubes refresh.
So the whole process is delayed, and this is a live process.
Let me know if you need any more info.
About the hardware configuration: do we need to increase something, like RAM, memory, etc.?
Awaiting your reply...
-
Loading jar files at execution time via URLClassLoader
Hello All,
I'm making a Java SQL client. I have practically all the basic work done; now I'm trying to improve it.
One thing I want it to do is allow the user to specify new drivers and to use them to make new connections. To do this I have this class:
public class DriverFinder extends URLClassLoader{
    private JarFile jarFile = null;

    private Vector drivers = new Vector();

    public DriverFinder(String jarName) throws Exception{
        super(new URL[]{ new URL("jar", "", "file:" + new File(jarName).getAbsolutePath() + "!/") }, ClassLoader.getSystemClassLoader());
        jarFile = new JarFile(new File(jarName));

        /*
        System.out.println("-->" + System.getProperty("java.class.path"));
        System.setProperty("java.class.path", System.getProperty("java.class.path") + File.pathSeparator + jarName);
        System.out.println("-->" + System.getProperty("java.class.path"));
        */

        Enumeration enumeration = jarFile.entries();
        while(enumeration.hasMoreElements()){
            String className = ((ZipEntry)enumeration.nextElement()).getName();
            if(className.endsWith(".class")){
                className = className.substring(0, className.length() - 6);
                if(className.indexOf("Driver") != -1) System.out.println(className);

                try{
                    Class classe = loadClass(className, true);
                    Class[] interfaces = classe.getInterfaces();
                    for(int i = 0; i < interfaces.length; i++){
                        if(interfaces[i].getName().equals("java.sql.Driver")){
                            drivers.add(classe);
                        }
                    }
                    Class superclasse = classe.getSuperclass();
                    interfaces = superclasse.getInterfaces();
                    for(int i = 0; i < interfaces.length; i++){
                        if(interfaces[i].getName().equals("java.sql.Driver")){
                            drivers.add(classe);
                        }
                    }
                }catch(NoClassDefFoundError e){
                }catch(Exception e){}
            }
        }
    }

    public Enumeration getDrivers(){
        return drivers.elements();
    }

    public String getJarFileName(){
        return jarFile.getName();
    }

    public static void main(String[] args) throws Exception{
        DriverFinder df = new DriverFinder("D:/Classes/db2java.zip");
        System.out.println("jar: " + df.getJarFileName());
        Enumeration enumeration = df.getDrivers();
        while(enumeration.hasMoreElements()){
            Class classe = (Class)enumeration.nextElement();
            System.out.println(classe.getName());
        }
    }
}
It loads a jar and searches it looking for drivers (classes implementing, directly or indirectly, the interface java.sql.Driver). At the end of the execution I have found all the drivers in the jar file.
The main application loads jar files from an XML file and instantiates one DriverFinder for each jar file. The problem is at execution time: it finds the drivers and, I think, loads them by issuing this statement (Class classe = loadClass(className, true);), but what I think is not what is happening... the execution of my code throws this exception:
java.lang.ClassNotFoundException: com.ibm.as400.access.AS400JDBCDriver
        at java.net.URLClassLoader$1.run(URLClassLoader.java:198)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:186)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:299)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:265)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:315)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:140)
        at com.marmots.database.DB.<init>(DB.java:44)
        at com.marmots.dbreplicator.DBReplicatorConfigHelper.carregaConfiguracio(DBReplicatorConfigHelper.java:296)
        at com.marmots.dbreplicator.DBReplicatorConfigHelper.<init>(DBReplicatorConfigHelper.java:74)
        at com.marmots.dbreplicator.DBReplicatorAdmin.<init>(DBReplicatorAdmin.java:115)
        at com.marmots.dbreplicator.DBReplicatorAdmin.main(DBReplicatorAdmin.java:93)
The driver file is not in the classpath!!!
I have also tried (as you can see in the commented lines) to update the system property java.class.path by adding the path to the jar, but that didn't work either.
I'm sure I'm making a/some mistake/s... can you help me?
Thanks in advance.
(If there is some incorrect word or expression, excuse me.)
-
How to get the execution time of a Discoverer Report from qpp_stats table
Hello
by reading some threads on this forum I became aware of the information stored in eul5_qpp_stats table. I would like to know if I can use this table to determine the execution time of a worksheet. In particular it looks like the field qs_act_elap_time stores the actual elapsed time of each execution of specific worksheet: am I correct? If so, how is this value computed? What's the unit of measure? I assume it's seconds, but then I've seen that sometimes I get numbers with decimals.
For example I ran a worksheet and it took more than an hour to run, and the value I get in the qs_act_elap_time column is 2218.313.
Assuming the unit of measure was seconds, that would mean approx 37 mins. Is that the actual execution time of the query on the database? I guess the actual execution time on my Discoverer client was longer, since some calculations were performed at the client level and not on the database.
I would really appreciate if you could shed some light on this topic.
Thanks and regards
Giovanni
Thanks a lot, Rod, for your prompt reply.
I agree with you about the accuracy of the data. Are you aware of any other way to track the execution times of Discoverer reports?
Thanks
Giovanni -
How to get the total execution time from a tkprof file
Hi,
I have a tkprof file. How can I get the total execution time? Going through the file, I guess the sum of "Total Waited" in the section "Elapsed times include waiting on following events:" would give the total time.
A sample of the tkprof output is given below.
SQL ID: gg52tq1ajzy7t Plan Hash: 3406052038
SELECT POSTED_FLAG
FROM
AP_INVOICE_PAYMENTS WHERE CHECK_ID = :B1 UNION ALL SELECT POSTED_FLAG FROM
AP_PAYMENT_HISTORY APH, AP_SYSTEM_PARAMETERS ASP WHERE CHECK_ID = :B1 AND
NVL(APH.ORG_ID, -99) = NVL(ASP.ORG_ID, -99) AND
(NVL(ASP.WHEN_TO_ACCOUNT_PMT, 'ALWAYS') = 'ALWAYS' OR
(NVL(ASP.WHEN_TO_ACCOUNT_PMT, 'ALWAYS') = 'CLEARING ONLY' AND
APH.TRANSACTION_TYPE IN ('PAYMENT CLEARING', 'PAYMENT UNCLEARING')))
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 442 0.08 0.13 0 0 0 0
Fetch 963 0.22 4.72 350 16955 0 521
total 1406 0.31 4.85 350 16955 0 521
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
1 1 1 UNION-ALL (cr=38 pr=3 pw=0 time=139 us)
1 1 1 TABLE ACCESS BY INDEX ROWID AP_INVOICE_PAYMENTS_ALL (cr=5 pr=0 pw=0 time=124 us cost=6 size=12 card=1)
1 1 1 INDEX RANGE SCAN AP_INVOICE_PAYMENTS_N2 (cr=4 pr=0 pw=0 time=92 us cost=3 size=0 card=70)(object id 27741)
0 0 0 NESTED LOOPS (cr=33 pr=3 pw=0 time=20897 us)
0 0 0 NESTED LOOPS (cr=33 pr=3 pw=0 time=20891 us cost=12 size=41 card=1)
1 1 1 TABLE ACCESS FULL AP_SYSTEM_PARAMETERS_ALL (cr=30 pr=0 pw=0 time=313 us cost=9 size=11 card=1)
0 0 0 INDEX RANGE SCAN AP_PAYMENT_HISTORY_N1 (cr=3 pr=3 pw=0 time=20568 us cost=2 size=0 card=1)(object id 27834)
0 0 0 TABLE ACCESS BY INDEX ROWID AP_PAYMENT_HISTORY_ALL (cr=0 pr=0 pw=0 time=0 us cost=3 size=30 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 350 0.15 4.33
Disk file operations I/O 3 0.00 0.00
latch: shared pool 1 0.17 0.17
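Totals like the ones in this report can also be pulled out programmatically; a rough sketch that assumes the standard tkprof call-summary layout (call, count, cpu, elapsed, disk, query, current, rows):

```python
def tkprof_totals(report_text):
    """Extract the 'total' row from a tkprof call-summary table.
    Assumes the standard column order: call, count, cpu, elapsed,
    disk, query, current, rows."""
    for line in report_text.splitlines():
        parts = line.split()
        if len(parts) == 8 and parts[0] == "total":
            return {
                "count": int(parts[1]),
                "cpu": float(parts[2]),
                "elapsed": float(parts[3]),
                "rows": int(parts[7]),
            }
    return None

sample = """\
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 442 0.08 0.13 0 0 0 0
Fetch 963 0.22 4.72 350 16955 0 521
total 1406 0.31 4.85 350 16955 0 521
"""
totals = tkprof_totals(sample)
```

For a multi-statement report you would apply the same scan per statement section and sum the elapsed values.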
********************************************************************************
user13019948 wrote:
Hi,
I have a tkprof file. How can I get the total execution time.
call count cpu elapsed disk query current rows
total 1406 0.31 4.85 350 16955 0 521
The TOTAL ELAPSED TIME is 4.85 seconds, from the elapsed column of the "total" line above.
-
How to improve the execution time of my VI?
My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup: first I use the directory list function to list all the files in a directory into a string array. Then I index this string array into a For Loop, in which each file is opened one at a time, and some other subVIs are called to do the data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to know which section of my VI takes the longest time. Thanks for any help.
Bryan,
If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (You can double-click it and look at the "guts".)
If the files come from a just-executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed, and you never want it to, e.g., pop up a file dialog if a file goes missing; simply skip it silently. If the open generates an error, just skip to the next in line. Case closed.
I would do a streamlined, low-level open->read->close for each file and do the "spreadsheet string to array" conversion in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files; it will read MUCH faster and take up less disk space.
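The binary-versus-text point can be demonstrated outside LabVIEW as well; a small sketch comparing the storage size of 10,000 float64 values written as text lines versus packed binary:

```python
import struct

values = [i / 3 for i in range(10_000)]

# Text form: one number per line, full repr (roughly what a
# spreadsheet-style file stores, and it must be parsed when read back).
text_bytes = "\n".join(repr(v) for v in values).encode()

# Binary form: a flat array of float64, exactly 8 bytes per value.
binary_bytes = struct.pack(f"{len(values)}d", *values)

binary_size = len(binary_bytes)  # always 10_000 * 8 bytes
text_size = len(text_bytes)
```

The binary file is a fixed, predictable size and loads with no parsing; the text form is larger here and costs CPU to convert on every read.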
LabVIEW Champion. Do more with less code and in less time.
How to find out the execution time of a sql inside a function
Hi All,
I am writing a function with only one IN parameter, in which I will pass a SQL SELECT statement. I want the function to return the exact execution time of that SQL statement.
CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
RETURN NUMBER
IS
exec_time NUMBER;
BEGIN
--Calculate the execution time for the incoming sql statement.
RETURN exec_time;
END function_name;
/
Please note that wrapping the query in a "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query, because the optimizer is smart and might choose a completely different execution plan for that query.
A simple test case shows the potential difference of work performed by the database:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Session altered.
SQL>
SQL> drop table count_test purge;
Table dropped.
Elapsed: 00:00:00.17
SQL>
SQL> create table count_test as select * from all_objects;
Table created.
Elapsed: 00:00:02.56
SQL>
SQL> alter table count_test add constraint pk_count_test primary key (object_id)
Table altered.
Elapsed: 00:00:00.04
SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.29
SQL>
SQL> set autotrace traceonly
SQL>
SQL> select * from count_test;
5326 rows selected.
Elapsed: 00:00:00.10
Execution Plan
Plan hash value: 3690877688
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5326 | 431K| 23 (5)| 00:00:01 |
| 1 | TABLE ACCESS FULL| COUNT_TEST | 5326 | 431K| 23 (5)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
419 consistent gets
0 physical reads
0 redo size
242637 bytes sent via SQL*Net to client
4285 bytes received via SQL*Net from client
357 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5326 rows processed
SQL>
SQL> select count(*) from (select * from count_test);
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 572193338
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| PK_COUNT_TEST | 5326 | 5 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query, but that's not that easy if you want to have an arbitrary query string as input.
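The fetch-all-rows point generalizes to any client language. A sketch of the technique (timing the execute plus a complete fetch) using sqlite3 as a stand-in database; the engine is incidental, the timing pattern is what matters:

```python
import sqlite3
import time

def time_full_fetch(conn, sql):
    """Execute sql and fetch every row, returning (row_count, elapsed_s).
    Timing only the execute() call, or a COUNT(*) wrapper, can measure a
    very different amount of work than actually retrieving the rows."""
    t0 = time.perf_counter()
    cur = conn.execute(sql)
    n = 0
    while True:
        batch = cur.fetchmany(1000)  # fetch in batches until exhausted
        if not batch:
            break
        n += len(batch)
    return n, time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE count_test (object_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO count_test VALUES (?, ?)",
                 [(i, f"obj{i}") for i in range(5000)])
rows, elapsed = time_full_fetch(conn, "SELECT * FROM count_test")
```

Against Oracle the same shape works with any driver that exposes fetchmany-style batching; network round-trips then dominate, which is exactly the work a COUNT(*) wrapper hides.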
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Need to know how to find the last execution time for a function module
HI all
I need to know
1) How do I find out the last execution time of a function module?
Say, for example, I executed a function module at 1:39 pm. How do I retrieve this time (1:39 pm)?
2) I have created 3 billing documents in tcode VF01, i.e. 3 billing document numbers would be created in the SAP table VBRP between 12 am and 12:30 am.
How do I capture the latest SAP database update between time intervals?
3) Suppose I am downloading a TXT file using GUI_DOWNLOAD and an error happens at, say, the 20th record. I can capture the error using the exception.
Is it possible to run the program again from the 21st record? All this will be running in the background...
Kindly clarify....
Points will be rewarded
Thanks in advance.
1. Use tcode STAT with the tcode of the FM as input and execute.
2. The billing documents are created in table VBRK (header), and there will always be a creation date and time:
VBRK-ERDAT (the date; you can check the time field also).
So given the date and time, we can filter and then display the records in intervals.
3. With an error exception, how would your TXT download have finished?
Once the exception is raised, there will not be a download.
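Point 2, selecting the documents created in a given window, is just a range filter on the creation date/time. A generic sketch with made-up document numbers standing in for VBRK rows:

```python
from datetime import time

# Stand-in for VBRK header rows: (billing document number, creation time).
billing_docs = [
    ("90000001", time(0, 5)),
    ("90000002", time(0, 25)),
    ("90000003", time(11, 0)),
]

def created_between(docs, start, end):
    """Keep only the documents whose creation time falls in [start, end]."""
    return [doc for doc in docs if start <= doc[1] <= end]

# The 12:00 am to 12:30 am window from the question.
overnight = created_between(billing_docs, time(0, 0), time(0, 30))
```

In ABAP the equivalent is a SELECT on VBRK with a BETWEEN condition on the date and time fields.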
regards,
vijay -
How to reduce execution time ?
Hi friends...
I have created a report to display vendor opening balances,
total debit, total credit, total balance and closing balance for a given date range. It is working fine, but it takes a long time to execute. How can I reduce the execution time?
Please help me; it's a very urgent report...
The code is below.
report yfiin_rep_vendordetail no standard page heading.
tables : bsik,bsak,lfb1,lfa1.
type-pools : slis .
*-- TABLE STRUCTURE --*
types : begin of tt_bsik,
bukrs type bukrs,
lifnr type lifnr,
budat type budat,
augdt type augdt,
dmbtr type dmbtr,
wrbtr type wrbtr,
shkzg type shkzg,
hkont type hkont,
bstat type bstat_d ,
prctr type prctr,
name1 type name1,
end of tt_bsik,
begin of tt_lfb1,
lifnr type lifnr,
mindk type mindk,
end of tt_lfb1,
begin of tt_lfa1,
lifnr type lifnr,
name1 type name1,
ktokk type ktokk,
end of tt_lfa1,
begin of tt_opbal,
bukrs type bukrs,
lifnr type lifnr,
gjahr type gjahr,
belnr type belnr_d,
budat type budat,
bldat type bldat,
waers type waers,
dmbtr type dmbtr,
wrbtr type wrbtr,
shkzg type shkzg,
blart type blart,
monat type monat,
hkont type hkont,
bstat type bstat_d ,
prctr type prctr,
name1 type name1,
tdr type dmbtr,
tcr type dmbtr,
tbal type dmbtr,
end of tt_opbal,
begin of tt_bs ,
bukrs type bukrs,
lifnr type lifnr,
name1 type name1,
prctr type prctr,
tbal type dmbtr,
bala type dmbtr,
balb type dmbtr,
balc type dmbtr,
bald type dmbtr,
bale type dmbtr,
gbal type dmbtr,
end of tt_bs.
************WORK AREA DECLARATION *********************
data : gs_bsik type tt_bsik,
gs_bsak type tt_bsik,
gs_lfb1 type tt_lfb1,
gs_lfa1 type tt_lfa1,
gs_ageing type tt_ageing,
gs_bs type tt_bs,
gs_opdisp type tt_bs,
gs_final type tt_bsik,
gs_opbal type tt_opbal,
gs_opfinal type tt_opbal.
************INTERNAL TABLE DECLARATION*************
data : gt_bsik type standard table of tt_bsik,
gt_bsak type standard table of tt_bsik,
gt_lfb1 type standard table of tt_lfb1,
gt_lfa1 type standard table of tt_lfa1,
gt_ageing type standard table of tt_ageing,
gt_bs type standard table of tt_bs,
gt_opdisp type standard table of tt_bs,
gt_final type standard table of tt_bsik,
gt_opbal type standard table of tt_opbal,
gt_opfinal type standard table of tt_opbal.
******************* ALV DECLARATIONS *******************
data : gs_fcat type slis_fieldcat_alv ,
gt_fcat type slis_t_fieldcat_alv ,
gs_sort type slis_sortinfo_alv,
gs_fcats type slis_fieldcat_alv ,
gt_fcats type slis_t_fieldcat_alv.
**********global data declration***************
data : kb type dmbtr ,
return like bapireturn ,
balancespgli like bapi3008-bal_sglind,
noteditems like bapi3008-ntditms_rq,
keybalance type table of bapi3008_3 with header line,
opbalance type p.
******************* SELECTION SCREEN DECLARATIONS *********************
selection-screen begin of block b1 with frame .
select-options : so_bukrs for bsik-bukrs obligatory,
so_lifnr for bsik-lifnr,
so_hkont for bsik-hkont,
so_prctr for bsik-prctr ,
so_mindk for lfb1-mindk,
so_ktokk for lfa1-ktokk.
selection-screen end of block b1.
selection-screen : begin of block b1 with frame.
parameters : p_rb1 radiobutton group rad1 .
select-options : so_date for sy-datum .
selection-screen : end of block b1.
********************************ASSIGNING ALV GRID
****field catalog for balance report
gs_fcats-col_pos = 1.
gs_fcats-fieldname = 'BUKRS'.
gs_fcats-seltext_m = text-001.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 2 .
gs_fcats-fieldname = 'LIFNR'.
gs_fcats-seltext_m = text-002.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 3.
gs_fcats-fieldname = 'NAME1'.
gs_fcats-seltext_m = text-003.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 4.
gs_fcats-fieldname = 'BALC'.
gs_fcats-seltext_m = text-016.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 5.
gs_fcats-fieldname = 'BALA'.
gs_fcats-seltext_m = text-012.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 6.
gs_fcats-fieldname = 'BALB'.
gs_fcats-seltext_m = text-013.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 7.
gs_fcats-fieldname = 'TBAL'.
gs_fcats-seltext_m = text-014.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 8.
gs_fcats-fieldname = 'GBAL'.
gs_fcats-seltext_m = text-015.
append gs_fcats to gt_fcats .
data : repid1 type sy-repid.
repid1 = sy-repid.
****************** INITIALIZATION EVENTS ******************************
initialization.
*Clearing the work area.
clear gs_bsik.
* Refreshing the internal tables.
refresh gt_bsik.
******************START OF SELECTION EVENTS **************************
start-of-selection.
*get data for balance report.
perform sub_openbal.
perform sub_openbal_display.
*&---------------------------------------------------------------------*
*&      Form  sub_openbal
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*  -->  p1            text
*  <--  p2            text
*----------------------------------------------------------------------*
form sub_openbal .
if so_date-low > sy-datum or so_date-high > sy-datum .
message i005(yfi02).
leave screen.
endif.
select bukrs lifnr gjahr belnr budat bldat
waers dmbtr wrbtr shkzg blart monat hkont prctr
from bsik into table gt_opbal
where bukrs in so_bukrs and lifnr in so_lifnr
and hkont in so_hkont and prctr in so_prctr
and budat in so_date .
select bukrs lifnr gjahr belnr budat bldat
waers dmbtr wrbtr shkzg blart monat hkont prctr
from bsak appending table gt_opbal
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr
and budat in so_date .
if sy-subrc <> 0.
message i007(yfi02).
leave screen.
endif.
select lifnr mindk from lfb1 into table gt_lfb1
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr and mindk in so_mindk.
select lifnr name1 ktokk from lfa1 into table gt_lfa1
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
loop at gt_opbal into gs_opbal .
loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
gs_opfinal-bukrs = gs_opbal-bukrs.
gs_opfinal-lifnr = gs_opbal-lifnr.
gs_opfinal-gjahr = gs_opbal-gjahr.
gs_opfinal-belnr = gs_opbal-belnr.
gs_opfinal-budat = gs_opbal-budat.
gs_opfinal-bldat = gs_opbal-bldat.
gs_opfinal-waers = gs_opbal-waers.
gs_opfinal-dmbtr = gs_opbal-dmbtr.
gs_opfinal-wrbtr = gs_opbal-wrbtr.
gs_opfinal-shkzg = gs_opbal-shkzg.
gs_opfinal-blart = gs_opbal-blart.
gs_opfinal-monat = gs_opbal-monat.
gs_opfinal-hkont = gs_opbal-hkont.
gs_opfinal-prctr = gs_opbal-prctr.
gs_opfinal-name1 = gs_lfa1-name1.
if gs_opbal-shkzg = 'H'.
gs_opfinal-tcr = gs_opbal-dmbtr * -1.
gs_opfinal-tdr = '000000'.
else.
gs_opfinal-tdr = gs_opbal-dmbtr.
gs_opfinal-tcr = '000000'.
endif.
append gs_opfinal to gt_opfinal.
endloop.
endloop.
endloop.
sort gt_opfinal by bukrs lifnr prctr .
so_date-low = so_date-low - 1 .
loop at gt_opfinal into gs_opfinal.
call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
exporting
companycode = gs_opfinal-bukrs
vendor = gs_opfinal-lifnr
keydate = so_date-low
balancespgli = ' '
noteditems = ' '
importing
return = return
tables
keybalance = keybalance.
clear kb .
loop at keybalance .
kb = keybalance-lc_bal + kb .
endloop.
gs_opdisp-balc = kb.
gs_opdisp-bukrs = gs_opfinal-bukrs.
gs_opdisp-lifnr = gs_opfinal-lifnr.
gs_opdisp-name1 = gs_opfinal-name1.
at new lifnr .
sum .
gs_opfinal-tbal = gs_opfinal-tdr + gs_opfinal-tcr .
gs_opdisp-tbal = gs_opfinal-tbal.
gs_opdisp-bala = gs_opfinal-tdr .
gs_opdisp-balb = gs_opfinal-tcr .
gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
append gs_opdisp to gt_opdisp.
endat.
clear gs_opdisp.
clear keybalance .
endloop.
delete adjacent duplicates from gt_opdisp.
endform. " sub_openbal
*&---------------------------------------------------------------------*
*&      Form  sub_openbal_display
*&---------------------------------------------------------------------*
form sub_openbal_display .
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
*   I_INTERFACE_CHECK           = ' '
*   I_BYPASSING_BUFFER          = ' '
*   I_BUFFER_ACTIVE             = ' '
i_callback_program = repid1
*   I_CALLBACK_PF_STATUS_SET    = ' '
*   I_CALLBACK_USER_COMMAND     = ' '
*   I_CALLBACK_TOP_OF_PAGE      = ' '
*   I_CALLBACK_HTML_TOP_OF_PAGE = ' '
*   I_CALLBACK_HTML_END_OF_LIST = ' '
*   I_STRUCTURE_NAME            =
*   I_BACKGROUND_ID             = ' '
*   I_GRID_TITLE                =
*   I_GRID_SETTINGS             =
*   IS_LAYOUT                   =
it_fieldcat = gt_fcats
*   IT_EXCLUDING                =
*   IT_SPECIAL_GROUPS           =
*   IT_SORT                     =
*   IT_FILTER                   =
*   IS_SEL_HIDE                 =
i_default = 'X'
i_save = 'X'
*   IS_VARIANT                  =
*   IT_EVENTS                   =
*   IT_EVENT_EXIT               =
*   IS_PRINT                    =
*   IS_REPREP_ID                =
*   I_SCREEN_START_COLUMN       = 0
*   I_SCREEN_START_LINE         = 0
*   I_SCREEN_END_COLUMN         = 0
*   I_SCREEN_END_LINE           = 0
*   IT_ALV_GRAPHICS             =
*   IT_HYPERLINK                =
*   IT_ADD_FIELDCAT             =
*   IT_EXCEPT_QINFO             =
*   I_HTML_HEIGHT_TOP           =
*   I_HTML_HEIGHT_END           =
* IMPORTING
*   E_EXIT_CAUSED_BY_CALLER     =
*   ES_EXIT_CAUSED_BY_USER      =
tables
t_outtab = gt_opdisp
exceptions
program_error = 1
others = 2.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
endform.                    " sub_openbal_display
I think you are using the FOR ALL ENTRIES statement in almost all of your SELECTs, but I didn't see any check before the FOR ALL ENTRIES.
If you are using FOR ALL ENTRIES IN gt_opbal, make sure that gt_opbal has some records; otherwise the SELECT will try to read all records from the database table.
Check the table before using FOR ALL ENTRIES in the select statement, like this:
if gt_opbal is not initial.
  select f1 f2 f3 into table itab
    from dbtab
    for all entries in gt_opbal
    where key = gt_opbal-key.
else.
  select f1 f2 into table itab
    from dbtab
    where a = 1
      and b = 2.
endif.
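To see why the guard matters, here is the same trap reproduced outside ABAP: building the restriction from an empty driver list silently drops the WHERE clause and returns every row. A minimal sketch in Python with sqlite3 (the table, columns, and data are made up for illustration):

```python
import sqlite3

def select_for_driver_keys(conn, driver_keys):
    """Restrict the read to the driver keys (the FOR ALL ENTRIES idea).
    Without the emptiness guard, an empty driver list would mean
    'no restriction' and the whole table would come back."""
    if not driver_keys:          # the guard the reply recommends
        return []
    placeholders = ",".join("?" * len(driver_keys))
    sql = f"SELECT lifnr, dmbtr FROM bsik WHERE lifnr IN ({placeholders})"
    return conn.execute(sql, driver_keys).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bsik (lifnr TEXT, dmbtr REAL)")
conn.executemany("INSERT INTO bsik VALUES (?, ?)",
                 [("V001", 10.0), ("V002", 20.0), ("V003", 30.0)])

print(select_for_driver_keys(conn, ["V001", "V003"]))  # two rows
print(select_for_driver_keys(conn, []))                # [] -- not all three rows
```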
I didn't see anything else wrong in your report, but this is a major time sink when the table you use with FOR ALL ENTRIES has no records. -
Reduce execution time with selects
Hi,
I have to reduce the execution time of a report; most of the time is consumed in the select query.
I have a table, gt_result:
DATA: BEGIN OF gwa_result,
tknum LIKE vttk-tknum,
stabf LIKE vttk-stabf,
shtyp LIKE vttk-shtyp,
route LIKE vttk-route,
vsart LIKE vttk-vsart,
signi LIKE vttk-signi,
dtabf LIKE vttk-dtabf,
vbeln LIKE likp-vbeln,
/bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
vkorg LIKE likp-vkorg,
werks LIKE likp-werks,
regio LIKE kna1-regio,
land1 LIKE kna1-land1,
xegld LIKE t005-xegld,
intca LIKE t005-intca,
bezei LIKE tvrot-bezei,
bezei1 LIKE t173t-bezei,
fecha(10) type c.
DATA: END OF gwa_result.
DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
And the select query is this:
SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
k~dtabf
l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
t~bezei tt~bezei
FROM vttk AS k
INNER JOIN vttp AS p ON k~tknum = p~tknum
INNER JOIN likp AS l ON p~vbeln = l~vbeln
INNER JOIN kna1 AS n ON l~kunnr = n~kunnr
INNER JOIN t005 AS o ON n~land1 = o~land1
INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
INTO TABLE gt_result
WHERE k~tknum IN s_tknum AND k~tplst IN s_tplst AND k~route IN s_route AND
k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
l~/bshm/le_nr_cust <> ' ' "IS NOT NULL
AND k~stabf = 'X'
AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
WHERE tl~/bshm/le_nr_cust IS NULL )
AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
AND ( o~xegld = ' '
OR ( o~xegld = 'X' AND
( ( n~land1 = 'ES'
AND ( n~regio = '51' OR n~regio = '52'
OR n~regio = '35' OR n~regio = '38' ) )
OR n~land1 = 'ESC' ) )
OR o~intca = 'AD' OR o~intca = 'GI' ).
Does somebody know how to reduce the execution time ?.
Thanks.
Hi,
Try to remove the join. Use separate selects as shown in the example below and, for the sake of the later reads, keep the key fields in your internal tables.
Then, once your final table is created, you can copy it into GT_FINAL, which will contain only the fields you need.
EX
data : begin of it_likp occurs 0,
vbeln like likp-vbeln,
/bshm/le_nr_cust like likp-/bshm/le_nr_cust,
vkorg like likp-vkorg,
werks like likp-werks,
kunnr like likp-kunnr,
end of it_likp.
data : begin of it_kna1 occurs 0,
kunnr like kna1-kunnr,
regio like kna1-regio,
land1 like kna1-land1,
end of it_kna1.
* Header data comes from VTTK (tplst, route and erdat are VTTK fields)
select tknum stabf shtyp route vsart signi dtabf
from vttk
into corresponding fields of table gt_result
where tknum in s_tknum and
tplst in s_tplst and
route in s_route and
erdat between s_erdat-low and s_erdat-high.
* VTTP links each shipment to its deliveries (declare it_vttp with
* tknum and vbeln); without it, LIKP cannot be read by vbeln
select tknum vbeln
from vttp
into table it_vttp
for all entries in gt_result
where tknum = gt_result-tknum.
select vbeln /bshm/le_nr_cust
vkorg werks kunnr
from likp
into table it_likp
for all entries in it_vttp
where vbeln = it_vttp-vbeln.
select kunnr
regio
land1
from kna1
into table it_kna1
for all entries in it_likp
where kunnr = it_likp-kunnr.
similarly for other tables.
Then loop at gt_result, read the corresponding tables, and populate the entire record:
loop at gt_result.
read table it_likp with key vbeln = gt_result-vbeln.
if sy-subrc eq 0.
move-corresponding it_likp to gt_result.
gt_result-kunnr = it_likp-kunnr.
modify gt_result.
endif.
read table it_kna1 with key kunnr = gt_result-kunnr.
if sy-subrc eq 0.
gt_result-regio = it_kna1-regio.
gt_result-land1 = it_kna1-land1.
modify gt_result.
endif.
endloop. -
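The read-and-merge loop in the reply above can be sketched in Python to show the underlying idea: fetch each table separately, index the lookup tables by key once, then enrich each driver row with O(1) lookups, the way READ TABLE ... WITH KEY behaves on a sorted or hashed table. All names and values here are hypothetical:

```python
# Join-free merge: driver rows plus two lookup tables (hypothetical data).
gt_result = [{"tknum": "T1", "vbeln": "D1"}, {"tknum": "T2", "vbeln": "D2"}]
it_likp   = [{"vbeln": "D1", "kunnr": "K1"}, {"vbeln": "D2", "kunnr": "K2"}]
it_kna1   = [{"kunnr": "K1", "regio": "35", "land1": "ES"},
             {"kunnr": "K2", "regio": "51", "land1": "ES"}]

# Index once so each per-row lookup is O(1) instead of a scan.
likp_by_vbeln = {r["vbeln"]: r for r in it_likp}
kna1_by_kunnr = {r["kunnr"]: r for r in it_kna1}

for row in gt_result:
    likp = likp_by_vbeln.get(row["vbeln"])
    if likp:
        row["kunnr"] = likp["kunnr"]
        kna1 = kna1_by_kunnr.get(row["kunnr"])
        if kna1:
            row["regio"] = kna1["regio"]
            row["land1"] = kna1["land1"]

print(gt_result[0])
```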
Oracle - select execution time
hi all,
when I executed a SQL SELECT (with joins and so on), I observed the following behaviour:
Query execution times are like: -
for 1000 records - 4 sec
5000 records - 10 sec
10000 records - 7 sec
25000 records - 16 sec
50000 records - 33 sec
I tested this behaviour with different sets of SQLs on different sets of data, but in each case the behaviour was more or less the same.
Can anyone explain why Oracle takes more time to return 5000 records than it takes for 10000?
Please note that this is not specific to the SQLs, as I tested with different sets of SQL on different sets of data.
Can there be any Oracle-internal reason that explains this behaviour?
regards
at
That is not normal behaviour. I've never come across anything like that that wasn't explainable by some environment factor (e.g. someone else doing a big sort).
I ran a couple of tests:
(1) to insert 5000 rows 0.1 seconds
to insert 10000 rows 0.18 seconds
and
(2) to select 5000 rows joined with 200K row table 0.19 seconds
to select 10000 rows joined with 200K row table 0.2 seconds
Although the second is close, I grant you!
Cheers, APC -
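A practical way to firm up measurements like these is to separate the cold first run from warm repeats, since the first execution pays for parsing and cold caches. A minimal, database-agnostic sketch in Python, with sqlite3 standing in for any DB-API connection (the table and query are made up for illustration):

```python
import sqlite3
import time

def time_query(conn, sql, runs=5):
    """Run the query several times; report the cold first run separately
    from the best warm run, so one-off effects don't skew the comparison."""
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        timings.append(time.perf_counter() - t0)
    return {"cold": timings[0], "best_warm": min(timings[1:])}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

stats = time_query(conn, "SELECT COUNT(*) FROM t WHERE n % 7 = 0")
print(stats)
```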
Same sqlID with different execution plan and Elapsed Time (s), Executions time
Hello All,
The AWR reports for two days show the same SQL ID with a different execution plan and different Elapsed Time (s) and Executions. Please help me find the reason for this change.
Please find the details below; on the 17th my processes are very slow compared to the 18th.
17th Oct (Elapsed Time (s) | Executions | SQL Id):
221,808,602 | 21         | 2tc2d3u52rppt
209,239,059 | 71,477,888 | 9c8wqzz7kyf37
144,813,295 | 1          | 0cqc3bxxd1yqy
128,892,787 | 16,673,829 | 84cqfur5na6fg
127,467,250 | 16,642,939 | 1uz87xssm312g
104,490,582 | 12,443,376 | a9n705a9gfb71
101,677,382 | 15,147,771 | 3p8q3q0scmr2k
98,000,414  | 1          | 0ybdwg85v9v6m
87,293,909  | 1          | 5kk8nd3uzkw13
77,786,274  | 74         | 1kn9bv63xvjtc
18th Oct (Elapsed Time (s) | Executions | SQL Id):
213,170,100 | 72,495,618 | 9c8wqzz7kyf37
139,331,777 | 1          | 7b0kzmf0pfpzn
102,045,818 | 1          | 8vp1ap3af0ma5
89,485,065  | 1          | 5kk8nd3uzkw13
67,520,695  | 8,058,820  | a9n705a9gfb71
62,627,205  | 1          | ctwjy8cs6vng2
57,965,892  | 268,353    | akp7vwtyfmuas
57,519,802  | 53         | 1kn9bv63xvjtc
52,690,398  | 0          | 9btkg0axsk114
34,767,882  | 1,003      | bdgma0tn8ajz9
Not only are the queries different, the number of blocks read by the top 10 queries is also much higher on the 17th than on the 18th.
The other big difference is the average read time on the two days:
Tablespace IO Stats
17th Oct
Tablespace       | Reads   | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes  | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
INDUS_TRN_DATA01 | 947,766 | 59 | 4.24  | 4.86  | 185,084 | 11 | 2,887  | 6.42
UNDOTBS2         | 517,609 | 32 | 4.27  | 1.00  | 112,070 | 7  | 108    | 11.85
INDUS_MST_DATA01 | 288,994 | 18 | 8.63  | 8.38  | 52,541  | 3  | 23,490 | 7.45
INDUS_TRN_INDX01 | 223,581 | 14 | 11.50 | 2.03  | 59,882  | 4  | 533    | 4.26
TEMP             | 198,936 | 12 | 2.77  | 17.88 | 11,179  | 1  | 732    | 2.13
INDUS_LOG_DATA01 | 45,838  | 3  | 4.81  | 14.36 | 348     | 0  | 1      | 0.00
INDUS_TMP_DATA01 | 44,020  | 3  | 4.41  | 16.55 | 244     | 0  | 1,587  | 4.79
SYSAUX           | 19,373  | 1  | 19.81 | 1.05  | 14,489  | 1  | 0      | 0.00
INDUS_LOG_INDX01 | 17,559  | 1  | 4.75  | 1.96  | 2,837   | 0  | 2      | 0.00
SYSTEM           | 7,881   | 0  | 12.15 | 1.04  | 1,361   | 0  | 109    | 7.71
INDUS_TMP_INDX01 | 1,873   | 0  | 11.48 | 13.62 | 231     | 0  | 0      | 0.00
INDUS_MST_INDX01 | 256     | 0  | 13.09 | 1.04  | 194     | 0  | 2      | 10.00
UNDOTBS1         | 70      | 0  | 1.86  | 1.00  | 60      | 0  | 0      | 0.00
STG_DATA01       | 63      | 0  | 1.27  | 1.00  | 60      | 0  | 0      | 0.00
USERS            | 63      | 0  | 0.32  | 1.00  | 60      | 0  | 0      | 0.00
INDUS_LOB_DATA01 | 62      | 0  | 0.32  | 1.00  | 60      | 0  | 0      | 0.00
TS_AUDIT         | 62      | 0  | 0.48  | 1.00  | 60      | 0  | 0      | 0.00
18th Oct
Tablespace       | Reads   | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
INDUS_TRN_DATA01 | 980,283 | 91 | 1.40 | 4.74 | (rest of the 18th Oct table is truncated in the original)
"The AWR reports for two days show the same SQL ID with a different execution plan and different Elapsed Time (s) and Executions; please help me find the reason for this change. Please find the details below; on the 17th my processes are very slow compared to the 18th."
You wrote that the execution plan is different, so I assume you have seen the plans; it can be very difficult to get hold of the old plan.
I think execution plans do not change from day to day unless you have added an index or made a similar change.
What does the ADDM report say about this statement?
As you know, it is normal to get different elapsed times for the same statement on different days; it depends on your database workload.
I think you should use the SQL Access Advisor and SQL Tuning Advisor for this statement; they can give you a solution for the slow-running problem.
Regards
Mahir M. Quluzade
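Acting on the advice above, a first sanity check is the per-execution cost of the SQL ids that appear in both days' top lists; a stable per-execution cost points at workload volume rather than a plan change. A small Python sketch using a few of the figures quoted earlier in the thread (only a subset of rows, for illustration):

```python
# Two days' top-SQL figures as (elapsed, executions, sql_id) tuples,
# taken from the AWR excerpt quoted above (subset of rows only).
day17 = [
    (221_808_602, 21, "2tc2d3u52rppt"),
    (209_239_059, 71_477_888, "9c8wqzz7kyf37"),
    (144_813_295, 1, "0cqc3bxxd1yqy"),
]
day18 = [
    (213_170_100, 72_495_618, "9c8wqzz7kyf37"),
    (139_331_777, 1, "7b0kzmf0pfpzn"),
]

def per_exec(rows):
    """Elapsed time per execution, skipping rows with zero executions."""
    return {sql_id: elapsed / execs for elapsed, execs, sql_id in rows if execs}

cost17, cost18 = per_exec(day17), per_exec(day18)
common = sorted(set(cost17) & set(cost18))
for sql_id in common:
    print(f"{sql_id}: {cost17[sql_id]:.2f} -> {cost18[sql_id]:.2f} per execution")
```

Here the shared statement's per-execution cost barely moves between the two days, which suggests the extra elapsed time came from execution volume, not a plan regression for that statement.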