Why does an HTML report take more time than the PDF one?
Hi,
I have created a report in Reports 6i. When I run the report on the web with FORMAT = PDF it runs very fast and shows all the pages in 2 minutes. But when I run with FORMAT = HTML it shows the first page in 2 minutes; after that it takes a lot of time to show the remaining pages. If there are more than 40 pages in total, the browser just freezes.
Can somebody give me the reason?
Is there any way to rectify this?
Thanks a lot.
Ram.
Hi Senthil,
I am running with the parameters below.
Format : HTML
Destination : Screen.
My default browser is IE. When I tried to run it using Netscape, it showed only 1 page out of 34.
When I run with Format = PDF it is faster, but the font size is small when it opens up. Of course the user can zoom in.
If I increase the report width from 11 to 14, the font size becomes very small when it opens up in the browser.
Is there any way I can set the zoom level when I run as PDF?
Thanks for your help.
Ram.
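One possibility, offered as an assumption rather than something confirmed in this thread: when the PDF is displayed in the browser by the Acrobat plugin, Adobe's documented PDF Open Parameters allow an initial zoom to be requested in the URL fragment, for example:

```
http://yourserver/dev60cgi/rwcgi60?report=myrep.rdf&destype=cache&desformat=pdf#zoom=150
```

The server path, CGI name and report name above are placeholder values; only the `#zoom=<percent>` fragment is the relevant part, and whether it is honored depends on the user's PDF viewer.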
Similar Messages
-
Why does import of a change request into production take more time than into quality?
Hello All,
Why does import of a change request into production take more time than import into quality?
Hi jahangeer,
I believe it takes the same time to import a request in both quality and production, as they will be in sync.
Even then, if it takes more time in production, that may depend on the change request.
Thanks
Pavan -
When I put my Mac to sleep it takes more time than normal (>20 secs). Sometimes, coming back from sleep, the system is not responding (freezes).
Perform SMC and NVRAM resets:
http://support.apple.com/en-us/HT201295
http://support.apple.com/en-us/HT204063
Then try a safe boot:
http://support.apple.com/en-us/HT201262
Any change?
Ciao. -
Count(*) for a select stmt takes more time than executing that sql stmt
Hi,
count(*) for a select stmt takes more time than executing that sql stmt.
Executing that particular select stmt takes 2.47 mins, and the select stmt uses the /*+ parallel */ hint (SQL optimizer) in the sql command for faster execution.
But if I try to find out the total number of rows in that query, it takes much more time:
almost 2.30 hrs and still running to find count(col).
Please help me get the row count faster.
Thanks in advance...
797525 wrote:
count(*) for a select stmt takes more time than executing that sql stmt. Executing that particular select stmt takes 2.47 mins using the /*+ parallel */ hint, but finding the total number of rows takes much more time: almost 2.30 hrs and still running for count(col). Please help me get the row count faster.
That may be because your client is displaying only the first few records when you are running the SELECT. But when you run COUNT(*), all the rows have to be counted.
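The point about first-rows fetching versus a full count can be illustrated with a small self-contained sketch (Python's sqlite3 here purely for illustration; the thread is about Oracle, and the table name and sizes below are made up):

```python
import sqlite3

# Hypothetical table standing in for the large Oracle table in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, "row") for i in range(100_000)))

# A client "running the SELECT" typically shows only the first screenful:
cur = conn.execute("SELECT * FROM t")
first_page = cur.fetchmany(50)   # cheap: only 50 rows are materialised

# COUNT(*) has no such shortcut: every row must be visited (or an index scanned).
total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(len(first_page), total)
```

The client tool stops after the first batch of rows, which is why the SELECT "finishes" in minutes while the count runs to completion.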
As already mentioned, please read the FAQ on how to post tuning questions. -
Delete DML statement takes more time than Update or Insert.
I want to know whether a delete statement takes more time than an update or insert DML command. Please help me resolve this doubt.
Regards.
I do not get good answers sometimes, so I ask again.
I think Alex's answer to your post was quite complete. If you missed some information, continue in the same post instead of opening a new thread with the same subject and content.
You should be satisfied with the answers you get. I also answered your question about global indexes, and I do think my answer was very complete. You may ask more if you want, but please stop multiposting. It is quite annoying.
Ok, have a nice day -
Why does Garbage Collection take more time on JRockit?
My company uses
BEA WebLogic 8.1.2
JRockit version 1.4.2
Windows 2003 32-bit
RAM 4 GB
-Xms = 1300
-Xmx = 1300
and is running an EJB application.
My problem is why JRockit takes so much time in GC. How can I solve this problem? Because my application will go down again.
This is my information from JRockit:
Gc Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep.
Total Garbage Collection Count: 10340
Last GC End: Wed May 10 13:55:37 ICT 2006
Last GC Start: Wed May 10 13:55:35 ICT 2006
Total Garbage Collection Time: 2:53:13.1
GC Handles Compaction: true
Concurrent: false
Generational: false
Incremental: false
Parallel: true
Hi,
I will suggest you to check a few places where you can see the status
1) SM37 job log (in the source system if the load is from R/3, or in BW if it is a datamart load): give the request name and it should show you the details of the request. If it is active, make sure the job log is being updated at frequent intervals.
Also see if there is any 'sysfail' for any data packet in SM37.
2) SM66: get the job details (server name, PID etc. from SM37) and see in SM66 whether the job is running or not (in the source system if the load is from R/3, or in BW if it is a datamart load). See if it is accessing/updating some tables or not doing anything at all.
3) RSMO: see what is available in the details tab. It may be in the update rules.
4) ST22: check if any short dump has occurred (in the source system if the load is from R/3, or in BW if it is a datamart load).
5) SM58 and BD87: for pending tRFCs and IDocs.
Once you identify the error you can rectify it.
If all the records are in the PSA you can pull them from the PSA to the target. Else you may have to pull them again from the source InfoProvider.
If it is running and you are able to see it active in SM66, you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening at the system level, like reading/inserting into tables etc.
If you feel it is active and running, you can verify by checking whether the number of records has increased in the data tables.
SM21 (system log) can also be helpful.
Also, RSA7 will show LUWs, which means more than one record.
Thanks,
JituK -
ZFS destroy command takes more time than usual
Hi,
When I run the destroy command it takes more time than usual.
I had exported a LUN from this ZFS volume earlier.
Later I removed the LUN view and deleted the LUN. After that, when I run the command below it takes more time (more than 5 mins and still running):
#zfs destroy storage/lu
Is there a way to quickly destroy the filesystem?
It looks like it is removing the allocated files.
                           capacity     operations    bandwidth
pool                    alloc   free   read  write   read  write
storage0                 107G   116T  3.32K  2.52K  3.48M  37.7M
storage0                 107G   116T    840    551  1.80M  6.01M
storage0                 106G   116T    273      0   586K      0
storage0                 106G   116T  1.19K      0  2.61M      0
storage0                 106G   116T  1.47K      0  3.20M
-
hello experts,
I am using Report Builder (10g) and working on a report. The report is OK, but it takes a lot of time to run.
I have already created indexes on the columns, but that is not yet sufficient to improve the performance.
So please help me improve the performance of my report.
Thanks and regards,
Hi Ravi,
No one can help you without your giving more detail.
The first step (indexing) you have already done.
Secondly, test your query with explain plan. How to run it?
explain plan for <your query>;
select * from table(dbms_xplan.display);
For more, read the SQL and PL/SQL FAQ.
Lastly, you may not get very good help for this here. Better to close it and post in the PL/SQL forum.
Hope this helps,
Hamid
Mark answers correct/helpful to help others find the right answer(s).
Calc takes more time than previous
Hi All,
I have a problem with a calc: it takes more time to execute than before. Please help!
I have set the calc cache high value in the .cfg file.
FIX (&As, &Af, &C,&RM, @RELATIVE("Pr",0), @RELATIVE("MS",0), @RELATIVE("Pt",0), @RELATIVE("Rn",0),@RELATIVE("Ll",0))
CLEARDATA "RI";
/* 22 Comment */
FIX("100")
"RI" = @ROUND ((("RDL")/("SBE"->"RDL"->"TMS"->"TP"->"TR"->"AF"->"Boom")),8);
ENDFIX
FIX("200")
"RI" = @ROUND ((("RDL")/("ODE"->"RDL"->"TMS"->"T_P"->"TR"->"AF"->"Boom")),8);
ENDFIX
Appriciate your help.
Regards,
Mink.
Mink,
If the calculation script you are using is the same one that performed better before, and the data being processed is the same (I mean the data might not have grown exceptionally), then there must be other reasons, like server-side OS, processor or memory issues. Consult your sysadmin; at least you'll be sure there is nothing wrong with the systems.
To fine-tune the calc, I think you can minimise the FIX statements. But that's not the current issue though.
Sandeep Reddy Enti
HCC
http://analytiks.blogspot.com -
Why did SQL2 take so much more time than SQL1?
I ran these 2 SQLs sequentially.
--- SQL1: It took 245 seconds.
create table PORTAL_DAYLOG_100118_bak
as
select * from PORTAL_DAYLOG_100118;
--- SQL2: It took 3105 seconds.
create table PORTAL_DAYLOG_100121_bak
as
select * from PORTAL_DAYLOG_100121;
It is really strange that SQL2 took almost 13 times as long as SQL1, with nearly the same data volume and the same data structure in the same tablespace.
Could anyone tell me the reason? Or how could I find out why?
Here is more detailed info for my case:
--- Server:
[@wapbi.no.sohu.com ~]$ uname -a
Linux test 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
--- DB
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
--- Tablespace:
CREATE TABLESPACE PORTAL DATAFILE
'/data/oradata/wapbi/portal01.dbf' SIZE 19456M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED,
'/data/oradata/wapbi/portal02.dbf' SIZE 17408M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
--- Tables:
SQL> select table_name, num_rows, blocks, avg_row_len from dba_tables
  2  where table_name in ('PORTAL_DAYLOG_100118','PORTAL_DAYLOG_100121');

TABLE_NAME                 NUM_ROWS   BLOCKS  AVG_ROW_LEN
PORTAL_DAYLOG_100118       20808536   269760           85
PORTAL_DAYLOG_100121       33747911   440512           86
CREATE TABLE PORTAL_DAYLOG_100118 (
  IP          VARCHAR2(20 BYTE),
  NODEPATH    VARCHAR2(50 BYTE),
  PG          VARCHAR2(20 BYTE),
  PAGETYPE    INTEGER,
  CLK         VARCHAR2(20 BYTE),
  FR          VARCHAR2(20 BYTE),
  PHID        INTEGER,
  ANONYMOUSID VARCHAR2(50 BYTE),
  USID        VARCHAR2(50 BYTE),
  PASSPORT    VARCHAR2(200 BYTE),
  M_TIME      CHAR(4 BYTE) NOT NULL,
  M_DATE      CHAR(6 BYTE) NOT NULL,
  LOGDATE     DATE
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE TABLE PORTAL_DAYLOG_100121 (
  IP          VARCHAR2(20 BYTE),
  NODEPATH    VARCHAR2(50 BYTE),
  PG          VARCHAR2(20 BYTE),
  PAGETYPE    INTEGER,
  CLK         VARCHAR2(20 BYTE),
  FR          VARCHAR2(20 BYTE),
  PHID        INTEGER,
  ANONYMOUSID VARCHAR2(50 BYTE),
  USID        VARCHAR2(50 BYTE),
  PASSPORT    VARCHAR2(200 BYTE),
  M_TIME      CHAR(4 BYTE) NOT NULL,
  M_DATE      CHAR(6 BYTE) NOT NULL,
  LOGDATE     DATE
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Any comment will be really appreciated!!!
Satine
Hey Anurag,
Thank you for your help!
Here it is.
SQL1:
create table portal.PORTAL_DAYLOG_100118_TEST
as
select * from portal.PORTAL_DAYLOG_100118
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 374.69 519.05 264982 265815 274858 20808536
Fetch 0 0.00 0.00 0 0 0 0
total 2 374.69 519.05 264982 265815 274858 20808536
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Rows Row Source Operation
0 LOAD AS SELECT (cr=268138 pr=264982 pw=264413 time=0 us)
20808536  TABLE ACCESS FULL PORTAL_DAYLOG_100118 (cr=265175 pr=264981 pw=0 time=45792172 us cost=73478 size=1768725560 card=20808536)
SQL2:
create table portal.PORTAL_DAYLOG_100121_TEST
as
select * from portal.PORTAL_DAYLOG_100121
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1465.72 1753.35 290959 291904 300738 22753695
Fetch 0 0.00 0.00 0 0 0 0
total 2 1465.72 1753.35 290959 291904 300738 22753695
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Rows Row Source Operation
0 LOAD AS SELECT (cr=295377 pr=290960 pw=289966 time=0 us)
22753695  TABLE ACCESS FULL PORTAL_DAYLOG_100121 (cr=291255 pr=290958 pw=0 time=56167952 us cost=80752 size=1956817770 card=22753695)
Best wishes,
Satine -
Threaded program takes more time than running serially!
Hello All
I've converted my program into a threaded application so as to improve speed. However, I found that after converting, the execution time is more than it was when the program was non-threaded. I'm not using any synchronized methods. Any idea what could be the reason?
Thanks in advance.
First, if you are doing I/O, then maybe that's what's taking the time and not the threads. One question that hasn't been asked about your problem:
How much is the time difference? If it takes like 10 seconds to run the one and 10 minutes to run the threaded version, then that's a big difference. But if it is like 10 seconds vs 11 seconds, I think you should reconsider if it matters so much.
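Measuring both variants on the same work makes the comparison concrete. Here is a sketch in Python rather than the poster's Java (note that Python's GIL serialises CPU-bound threads, which only reinforces the point that threading is not automatically faster; thread creation and scheduling are overhead either way):

```python
import threading
import time

N, PARTS = 200_000, 4

def square_sum(lo, hi):
    # CPU-bound toy task: sum of squares over a range.
    return sum(i * i for i in range(lo, hi))

# Sequential version.
t0 = time.perf_counter()
serial = square_sum(0, N)
t_serial = time.perf_counter() - t0

# Threaded version: split the range into PARTS chunks.
results = [0] * PARTS

def worker(k):
    results[k] = square_sum(k * (N // PARTS), (k + 1) * (N // PARTS))

t0 = time.perf_counter()
threads = [threading.Thread(target=worker, args=(k,)) for k in range(PARTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
t_threaded = time.perf_counter() - t0

total = sum(results)  # both variants must compute the same answer
print(f"serial {t_serial*1000:.1f} ms vs threaded {t_threaded*1000:.1f} ms")
```

Timing both versions of the identical workload, as above, tells you whether the difference is seconds or orders of magnitude before you start tuning.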
One analogy that comes to mind about multiple threads vs. sequential code is this:
With sequentially run code, all the code segments are lined up in order and they all go thru the door one after the other. As one goes thru they all move up closer, thus they know who's going first.
With multi-threaded code, all the code segments sort of pile up around the door in a big crowd. Some push through one at a time while others let them (priority), while at other times two go for the door at the same time and there might be a few moments of "oh, after you", "no, after you", "oh no, I insist, after you" before one goes through. So that could introduce some delay. -
iPhoto library takes more space than the actual data that is in it.
Why is the iPhoto Library taking almost 10 GB more than the real data I have in it?
I have the original photos in a folder and they take 10.7 GB of space on my hard drive, but when I Get Info on the iPhoto library it is taking 17.9 GB of space.
Do you have two copies of the photos on the HD? Why?
I have read that for every single move I do in iPhoto it makes a copy of the file for backup, no matter how big the photo is, even if you just rotate the picture.
When you edit a pic - including Rotation - iPhoto makes a copy of the Original and carries out the edit on that copy and saves it. However, if you then make another edit it does not make a second copy, it simply destroys the first copy and applies the total edits to a new copy. So there is never more than your Original and a single Modified version of your pics.
That's just taking so much space. Is there any way to make iPhoto stop doing that?
No. If you don't want that, simply don't use iPhoto. iPhoto is a Digital Asset Manager, and it adheres to the best practices of DAM by treating your original file like a film photographer protects his negative.
Also, when I look in the Finder and search for every file I have used during the day, if I have just opened iPhoto once it shows a lot of these files, named like Album.999001.ipmeta, that are from iPhoto. If I haven't used it, just opened it, why is it showing that?
This is just the way that iPhoto works. These ipmeta files are tiny 4 KB files. They take up the smallest amount of disk space possible.
Regards
TD -
Row insert in TimesTen takes more time than Oracle
Hi,
I have a TimesTen IMDB (11.2.1.8.0, 64-bit Linux/x86_64) with an underlying Oracle Database 11gR2.
Sys.odbc.ini entry is :
[DSN_NAME]
Driver=/application/TimesTen/matrix/lib/libtten.so
DataStore=/application/TimesTen/DSN_NAME_datastore/DSN_NAME_DS_DIR
LogDir=/logs_timeten/DSN_NAME_logdir
PermSize=8000
TempSize=250
PLSQL=1
DatabaseCharacterSet=WE8MSWIN1252
OracleNetServiceName=DBNAME
Connections=500
PassThrough=0
SQLQueryTimeout=250
LogBufMB=512
LogFileSize=512
LogPurge=1
When I try to insert a single row into a table in an async cache group in TimesTen, it takes 3 ms (the table has 6 indexes on it). On removing 4 indexes the performance improves to 1 ms. However, inserting the same row in Oracle (with 6 indexes) takes 1.2 ms.
How can we improve insert performance in TimesTen? Kindly assist.
Regards,
Karan
PS: During the test run, we monitored deadlocks and log buffer waits with the following query, and both values never changed from zero:
select PERM_ALLOCATED_SIZE,PERM_IN_USE_SIZE,TEMP_ALLOCATED_SIZE,TEMP_IN_USE_SIZE,DEADLOCKS,LOG_FS_READS,LOG_FS_WRITES,LOG_BUFFER_WAITS from sys.monitor;
Edited by: 853100 on Nov 2, 2012 4:19 AM
This is not very efficient, as the statement will likely need to be parsed for each INSERT. Even a soft parse is very expensive compared to the cost of the actual INSERT.
Can you try changing your code to something like the following, just to evaluate the difference in performance? The objective is to prepare the INSERT just once, outside of the INSERT loop, and then execute the prepared INSERT many times, passing the required input parameters. I'm not a Pro*C expert, but an outline of the code looks something like this:
char * ins1 = " INSERT INTO ORDERS(
ORD_ORDER_NO ,
ORD_SERIAL_NO ,
ORD_SEM_SMST_SECURITY_ID,
ORD_BTM_EMM_MKT_TYPE ,
ORD_BTM_BOOK_TYPE ,
ORD_EXCH_ID ,
ORD_EPM_EM_ENTITY_ID ,
ORD_EXCH_ORDER_NO ,
ORD_CLIENT_ID ,
ORD_BUY_SELL_IND ,
ORD_TRANS_CODE ,
ORD_STATUS ,
ORD_ENTRY_DATE ,
ORD_ORDER_TIME ,
ORD_QTY_ORIGINAL ,
ORD_QTY_REMAINING ,
ORD_QTY_DISC ,
ORD_QTY_DISC_REMAINING ,
ORD_QTY_FILLED_TODAY ,
ORD_ORDER_PRICE ,
ORD_TRIGGER_PRICE ,
ORD_DISC_QTY_FLG ,
ORD_GTC_FLG ,
ORD_DAY_FLG ,
ORD_IOC_FLG ,
ORD_MIN_FILL_FLG ,
ORD_MKT_FLG ,
ORD_STOP_LOSS_FLG ,
ORD_AON_FLG ,
ORD_GOOD_TILL_DAYS ,
ORD_GOOD_TILL_DATE ,
ORD_AUCTION_NO ,
ORD_ACC_CODE ,
ORD_UM_USER_ID ,
ORD_MIN_FILL_QTY ,
ORD_SETTLEMENT_DAYS ,
ORD_COMPETITOR_PERIOD ,
ORD_SOLICITOR_PERIOD ,
ORD_PRO_CLIENT ,
ORD_PARTICIPANT_TYPE ,
ORD_PARTICIPANT_CODE ,
ORD_COUNTER_BROKER_CODE ,
ORD_CUSTODIAN_CODE ,
ORD_SETTLER ,
ORD_REMARKS ,
ORD_BSE_DELV_FLAG ,
ORD_BSE_NOTICE_NUM ,
ORD_ERROR_CODE ,
ORD_EXT_CLIENT_ID ,
ORD_SOURCE_FLG ,
ORD_BUY_BACK_FLG ,
ORD_RESERVE_FLG ,
ORD_BSE_REMARK ,
ORD_CARRY_FORWARD_FLAG ,
ORD_ORDER_OFFON ,
ORD_D2C1_FLAG ,
ORD_FI_RETAIL_FLG ,
ORD_OIB_INT_REF_ID ,
ORD_BOB_BASKET_ORD_NO ,
ORD_PRODUCT_ID ,
ORD_OIB_EXEC_REPORT_ID ,
ORD_BANK_DP_TXN_ID ,
ORD_USERINFO_PROG ,
ORD_BANK_CODE ,
ORD_BANK_ACC_NUM ,
ORD_DP_CODE ,
ORD_DP_ACC_NUM ,
ORD_SESSION_ORDER_TYPE ,
ORD_ORDER_CC_SEQ ,
ORD_RMS_DAEMON_STATUS ,
ORD_GROUP_ID ,
ORD_REASON_CODE ,
ORD_REASON_DESCRIPTION ,
ORD_SERIES_IND ,
ORD_BOB_BASKET_TYPE ,
ORD_ORIGINAL_TIME ,
ORD_TRD_EXCH_TRADE_NO,
ORD_MKT_PROT ,
ORD_SETTLEMENT_TYPE ,
ORD_SUB_CLIENT,
ORD_ALGO_OI_NUM,
ORD_FROM_ALGO_CLORDID,
ORD_FROM_ALGO_ORG_CLORDID
)
VALUES(
:lvar_ord_order_no ,
:lvar_ord_serial_no ,
ltrim(rtrim(:lvar_ord_sem_smst_security_id)),
ltrim(rtrim(:lvar_ord_btm_emm_mkt_type)),
ltrim(rtrim(:lvar_ord_btm_book_type)),
ltrim(rtrim(:lvar_ord_exch_id)) ,
decode(:lD2C1Flag,'N',ltrim(rtrim(:lvar_ord_epm_em_entity_id)),ltrim(rtrim(:sD2C1ControllerId))) ,
:insertExchOrderNo,
ltrim(rtrim(:lvar_ord_client_id)) ,
ltrim(rtrim(:lvar_ord_buy_sell_ind)),
:lvar_ord_trans_code,
:cTransitStatus ,
sysdate,
sysdate,
:lvar_ord_qty_original,
decode(:lvar_ord_qty_remaining ,-1,to_number(null),:lvar_ord_qty_remaining) ,
decode(:lvar_ord_qty_disc ,-1,to_number(null),:lvar_ord_qty_disc),
decode(:lvar_ord_qty_disc_remaining,-1,to_number(null),:lvar_ord_qty_disc_remaining),
:lvar_ord_qty_filled_today ,
:lvar_ord_order_price,
decode(:lvar_ord_trigger_price ,-1,to_number(null),:lvar_ord_trigger_price) ,
decode(:lvar_ord_disc_qty_flg ,-1,null,:lvar_ord_disc_qty_flg) ,
decode(:lvar_ord_gtc_flg ,-1,null,:lvar_ord_gtc_flg) ,
decode(:lvar_ord_day_flg ,-1,null,:lvar_ord_day_flg) ,
decode(:lvar_ord_ioc_flg ,-1,null,:lvar_ord_ioc_flg) ,
decode(:lvar_ord_min_fill_flg ,-1,null,:lvar_ord_min_fill_flg) ,
decode(:lvar_ord_mkt_flg ,-1,null,:lvar_ord_mkt_flg) ,
decode(:lvar_ord_stop_loss_flg ,-1,null,:lvar_ord_stop_loss_flg) ,
decode(:lvar_ord_aon_flg ,-1,null,:lvar_ord_aon_flg) ,
decode(:lvar_ord_good_till_days ,-1,to_number(null),:lvar_ord_good_till_days),
to_date(ltrim(rtrim(:lvar_ord_good_till_date)) ,'dd-mm-yyyy'),
:lvar_ord_auction_no,
ltrim(rtrim(:lvar_ord_acc_code)),
ltrim(rtrim(:lv_UserIdOrLogPktId)),
decode(:lvar_ord_min_fill_qty,-1,to_number(null),:lvar_ord_min_fill_qty),
:lvar_ord_settlement_days,
:lvar_ord_competitor_period,
:lvar_ord_solicitor_period,
:lvar_ord_pro_client ,
ltrim(rtrim(:lvar_ord_participant_type)),
ltrim(rtrim(:lvar_ord_participant_code)),
ltrim(rtrim(:lvar_ord_counter_broker_code)),
trim(:lvar_ord_custodian_code) ,
ltrim(rtrim(:lvar_ord_settler)),
ltrim(rtrim(:lvar_ord_remarks)),
ltrim(rtrim(:lvar_ord_bse_delv_flag)) ,
ltrim(rtrim(:lvar_ord_bse_notice_num)) ,
:lvar_ord_error_code ,
trim(:lvar_ord_ext_client_id) ,
ltrim(rtrim(:lvar_ord_source_flg)),
ltrim(rtrim(:lvar_ord_buyback_flg)),
:lvar_ord_reserve_flag ,
trim(:lvar_ord_bse_remark) ,
ltrim(rtrim(:lvar_ord_carryfwd_flg)),
:cOnStatus,
:lD2C1Flag,
:lSendToRemoteUser,
:lInternalRefId,
:lvar_bob_basket_ord_no,
ltrim(rtrim(:lvar_ord_product_id)),
trim(:lvar_ord_oib_exec_report_id) ,
:lvar_BankDpTxnId ,
ltrim(rtrim(:lEquBseUserCode )),
ltrim(rtrim(:lvar_BankCode)) ,
ltrim(rtrim(:lvar_BankAccNo)),
ltrim(rtrim(:lvar_DPCode)),
ltrim(rtrim(:lvar_DPAccNo)) ,
ltrim(rtrim(:lvar_OrderSessionType)) ,
:lvar_ord_order_cc_seq,
:lvar_ord_rms_daemon_status ,
:lvarGrpId,
:lvar_ord_reason_code ,
trim(:lvar_ord_reason_description) ,
:lSecSeriesInd,
ltrim(rtrim(:lBasketType)),
sysdate,
(-1 * :lvar_ord_serial_no),
:MktProt ,
:lvar_ord_sett_type,
ltrim(rtrim(:lvar_ca_cli_type)) ,
:ComplianceID,
ltrim(rtrim(:lvar_ClOrd)),
ltrim(rtrim(:lvar_OrgClOrd))
)";
EXEC SQL AT :db_conn PREPARE i1 FROM :ins1;
logTimestamp("BEFORE inserting in orders table");
for (i = 0; i < NUM_INSERTS; i++)
{
    if (strncmp(lvar_ord_exch_id.arr, "NSE", 3) == 0)
    {
        if (tmpExchOrderNo == -1)
            insertExchOrderNo = NULL;
        else
            insertExchOrderNo = tmpExchOrderNo;
    }
    else if (strncmp(lvar_ord_exch_id.arr, "BSE", 3) == 0)
    {
        if (tmpExchOrderNo == -1)
            insertExchOrderNo = NULL;
        else
            insertExchOrderNo = tmpExchOrderNo;
    }
    lvar_ord_acc_code.len = strlen(lvar_ord_acc_code.arr);
    sprintf(lv_UserIdOrLogPktId.arr, "%d", UserIdOrLogPktId);
    lv_UserIdOrLogPktId.len = strlen(lv_UserIdOrLogPktId.arr);
    lEquBseUserCode.len = fTrim(lEquBseUserCode.arr, 16);
    lvar_ord_buyback_flg.len = fTrim(lvar_ord_buyback_flg.arr, 1);
    lvar_ord_exch_id.len = fTrim(lvar_ord_exch_id.arr, 3);
    EXEC SQL AT :db_conn EXECUTE i1 USING
        :lvar_ord_order_no,
        :lvar_ord_serial_no,
        :lvar_ord_sem_smst_security_id,
        :lvar_ord_btm_emm_mkt_type,
        etc. ;
}
logTimestamp("AFTER inserting in orders table");
/* Divide reported time by NUM_INSERTS to get average time for one insert */
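For comparison, here is the same prepare-once / execute-many idea in a compact, runnable form (Python DB-API with sqlite3, purely illustrative; the table and row count below are made up, and the thread's real environment is TimesTen via Pro*C):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ord_order_no INTEGER, ord_serial_no INTEGER)")

rows = [(i, i * 10) for i in range(10_000)]

# Prepare-once / execute-many: the INSERT statement is parsed a single time
# and only the bind values change for each execution.
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)
```

Dividing the total elapsed time by the number of rows gives the average per-insert cost without the per-statement parse overhead.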
Chris -
Is the 80GB Video iPod more fragile than the 30GB one?
I plan to get an iPod video, and my concern here is whether the 80GB one might have a more fragile HD since it is denser.
Thz
I doubt there is any difference. The future terabyte model might be a different story.
-
My TDS Certificate report takes more time fetching the data from BSEG.
I am only checking MBLNR and company code.
I think you can do this, but not the way you are doing it now. You have to use most of the key when going against BSEG. I'll look around a bit and get back to you.
Yes - you can get the purchase order and sales order line items for the material. Then go to the history tables to get the material document numbers. Finally, use AWTYP and AWKEY to get the accounting document numbers from BKPF.
Rob
Message was edited by:
Rob Burbank