Too much time taken for adding/updating records in Netscape LDAP
Hi,
I am a newbie to LDAP and have a text file with 80,000 users' information in it, which I need to add into LDAP.
I am using Netscape Directory Server 5.1 as my directory server.
The time being taken to add records into LDAP is about 3 minutes for each add.
This is slowing down my batch job terribly and causing performance issues.
Does anybody have any idea or knowledge of whether this is normal, or whether things could be sped up? If yes, a few pointers and tips would be highly appreciated.
Thanks in advance
piyush_ast.
I don't really have any suggestions for the speed, other than to make the code or network connectivity more efficient if possible. Maybe you should read the Netscape documentation. However, perhaps LDAP isn't the best solution for you. LDAP is optimized for a read-often, write-rarely type of setup. If you're constantly updating such a large volume of records, maybe a database would be the better choice.
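If you do stay with LDAP, one thing worth checking is whether the batch binds (or even spawns a new process) once per entry; three minutes per add usually points at per-entry connection overhead rather than the server itself. Below is a minimal JNDI sketch that binds once and reuses the connection for all 80,000 adds; the host, bind DN, base DN, and the tab-separated file layout (uid, cn, sn) are all assumptions, not details from the post:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class BulkLdapAdd {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389"); // assumed host/port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager"); // assumed bind DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env); // bind once for the whole batch
        BufferedReader in = new BufferedReader(new FileReader("users.txt"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("\t"); // assumed layout: uid, cn, sn
                BasicAttributes attrs = new BasicAttributes(true);
                BasicAttribute oc = new BasicAttribute("objectClass");
                oc.add("top");
                oc.add("person");
                oc.add("organizationalPerson");
                oc.add("inetOrgPerson");
                attrs.put(oc);
                attrs.put("cn", f[1]);
                attrs.put("sn", f[2]);
                // one add per entry, all over the same already-open connection
                ctx.createSubcontext("uid=" + f[0] + ",ou=People,dc=example,dc=com", attrs);
            }
        } finally {
            in.close();
            ctx.close();
        }
    }
}
For a one-time load of this size it is usually even faster to convert the text file to LDIF and use the server's offline bulk import (ldif2db on Netscape/iPlanet Directory Server) instead of adding over the wire.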
Similar Messages
-
Too much time taken to go to sleep
Hey there,
I've got a problem here: my MacBook takes nearly one minute to go to sleep.
It really wastes my time waiting for it to go to sleep.
Your reply is really appreciated.
thank you,
Arif
[new to mac world]
-
Data extraction from R/3 to BW taking too much time
Hi,
We have one delta data load from R/3 to an ODS that is taking 4-5 hours; the job runs in R/3 itself for 4-5 hours even for 30-40 records. After this, the ODS data is updated to the cube, but since the ODS load itself takes too much time, the delta brings 0 records into the cube, and hence we have to update it manually.
Also, while the job for the ODS load is running, we can't check the delta records in RSA3; it gives the error "error occurs during extraction".
Can you please guide us on how to make this load faster? If any index needs to be built, how do we proceed on that front?
Thanks
Nilesh
Rahul,
I tried with RSA3; it's giving me a dump with the message "Result of customer enhancement: 19571 records".
Error details are -
Short text
Function module " " not found.
What happened?
The function module " " is called,
but cannot be found in the library.
Error in the ABAP Application Program
The current ABAP program "SAPLRSA3" had to be terminated because it
came across a statement that unfortunately cannot be executed.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
-
Auto Invoice program taking too much time: problem with an update SQL
Hi,
Oracle db version 11.2.0.3
Oracle EBS version : 12.1.3
Though we have a SEV-1 SR with Oracle, we have not been able to find much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting, we found one query to be taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same; it's an update query. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
As per Oracle, they had suggested multiple patches, but that has not been helpful. Please suggest how I should tune this query; I don't have much of a clue about query tuning.
Regards
Hi Paul,
My bad. I am sorry I missed it.
Query as below:
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this could be a seeded query, but my attempt is to tune it.
Regards
-
Large record retrieval over the network takes too much time. How can I improve it?
Hi All
I have an Oracle 10g server, and I am trying to fetch around 200 thousand (2 lakh) records. I am using a servlet deployed on JBoss 4.0.
These records come over the network.
I have used the simple rs.next() method, but it takes too much time: I get only 30 records within 1 second, so fetching all 2 lakh records takes more than 40 minutes, and my requirement is that they have to be retrieved within 40 minutes.
Is there another way around this problem? Is there any way that the ResultSet can get 1000 records in one call?
As I read somewhere: "If we use a normal ResultSet, data isn't retrieved until you do the next call. The ResultSet isn't a memory table; it's a cursor into the database which loads the next row on request (though the drivers are at liberty to anticipate the request)."
So if we can request around 1000 records in one call, then maybe we can reduce the time.
Does anyone have an idea how to improve this?
Regards,
Shailendra Soni
That's true...
I have solved my problem by invoking setFetchSize on the ResultSet object, like ResultSet.setFetchSize(1000).
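For anyone who hits the same thing, here is a minimal sketch of that fetch-size fix; the JDBC URL, credentials, query, and table name are placeholders, not details from this thread. The Oracle thin driver fetches only 10 rows per network round trip by default, which is exactly why a plain rs.next() loop feels so slow over a network:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeDemo {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger"); // placeholders
        Statement st = con.createStatement();
        st.setFetchSize(1000); // ask the driver for 1000 rows per round trip
        ResultSet rs = st.executeQuery("SELECT id, name FROM big_table"); // placeholder query
        try {
            while (rs.next()) {
                // each next() is now served from a 1000-row buffer, so the
                // network is hit once per 1000 rows instead of once per 10
                String name = rs.getString("name"); // process the row
            }
        } finally {
            rs.close();
            st.close();
            con.close();
        }
    }
}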
But the problem is only sorted out for fewer than 1 lakh records; I still want to test with more than 1 lakh records.
Actually, I read one nice article on the net:
[http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
They describe solutions for this type of problem, but they don't give any examples, and without examples I can't figure out how to implement them.
They gave two solutions, i.e.:
Fetch small amounts of data iteratively instead of fetching the whole data at once.
Applications generally need to retrieve huge amounts of data from the database using JDBC, in operations like searching. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and has an impact on performance. The solutions for the problem are:
1. Cache the search data at the server side and return the data iteratively to the client. For example, if the search returns 1000 records, return the data to the client in 10 iterations where each iteration has 100 records.
// But I don't understand how I can do this in Java.
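A minimal sketch of what option 1 could look like, assuming the servlet runs the search once and keeps the full result (here a list of string arrays) in the user's HTTP session between requests; the class and all names in it are made up for illustration:
import java.util.ArrayList;
import java.util.List;

public class SearchPager {
    private final List<String[]> cachedRows; // filled once by the search query
    private final int pageSize;

    public SearchPager(List<String[]> rows, int pageSize) {
        this.cachedRows = rows;
        this.pageSize = pageSize;
    }

    // Returns page n (0-based) of the cached result; the client calls this
    // once per page instead of receiving all rows in a single response.
    public List<String[]> getPage(int n) {
        int from = n * pageSize;
        if (from >= cachedRows.size()) {
            return new ArrayList<String[]>(); // past the end: empty page
        }
        int to = Math.min(from + pageSize, cachedRows.size());
        return cachedRows.subList(from, to);
    }

    public int getPageCount() {
        return (cachedRows.size() + pageSize - 1) / pageSize; // ceiling division
    }
}
The servlet would store a SearchPager in the session after the first query and serve getPage(n) on each subsequent request.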
2. Use stored procedures to return data iteratively. This does not use server-side caching; rather, the server-side application uses stored procedures to return small amounts of data iteratively.
// But I don't understand how I can do this in Java.
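And a sketch of option 2, calling a hypothetical Oracle stored procedure get_page that returns one slice of rows per call through a REF CURSOR output parameter; the procedure name and its signature are assumptions, only the JDBC calls themselves are standard:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ProcPager {
    // get_page(pageNo, pageSize, OUT ref_cursor) is an assumed procedure,
    // not part of any standard API; it would do the paging in PL/SQL.
    public static void printPage(Connection con, int pageNo, int pageSize)
            throws SQLException {
        CallableStatement cs = con.prepareCall("{call get_page(?, ?, ?)}");
        try {
            cs.setInt(1, pageNo);
            cs.setInt(2, pageSize);
            cs.registerOutParameter(3, oracle.jdbc.OracleTypes.CURSOR);
            cs.execute();
            ResultSet rs = (ResultSet) cs.getObject(3); // the returned page
            while (rs.next()) {
                System.out.println(rs.getString(1)); // process one row of the page
            }
            rs.close();
        } finally {
            cs.close();
        }
    }
}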
If you know either of these solutions, can you please give me examples of how to do it?
Thanks in advance,
Shailendra
-
Updating the iPod with iTunes takes too much time
I have around 3000 songs in my iTunes library, but when I change some tracks' names or change anything about my songs in a different way, like putting their names in capital letters or changing their artwork, etc., iTunes takes too much time updating the changes to my iPod. Why?
iPod video, Windows XP
Generally the iPod update is pretty quick unless you are making many hundreds of changes at a time. It could be that the USB port is slow: try it in another port, preferably on the back of the PC, as some computers have underpowered ports on the front panel. Also, the recommended system requirements specify USB 2; it will work from a USB 1 port, but much more slowly if that's the type of port you have.
-
OWB 10g - The time taken for data load is too high
I am loading data on the test data warehouse server, and the time taken to load it is very high. The size of the data is around 7 GB (the size of the flat files on the OS).
The time it takes to load the same amount of data on the production server from the staging area to the presentation area (data warehouse) is close to 8 hours at most.
But in the test environment, the time taken to execute one mapping (containing 300,000 records) is itself 8 hours.
The version of the Oracle database on both the test and production servers is the same, i.e. Oracle 9i.
The configuration of the production server is: 4 Pentium III processors (2.7 GHz each), 2 GB RAM, Windows 2000 Advanced Server, 8 KB primary memory cache, 512 KB secondary memory cache, 440.05 GB usable hard drive capacity, 73.06 GB hard drive free space.
The configuration of the test server is: 4 Pentium III processors (2.4 GHz each), 1 GB RAM, Windows 2000 Advanced Server, 8 KB primary memory cache,
512 KB secondary memory cache, 144.96 GB usable hard drive capacity, 5.22 GB hard drive free space.
Can you guys please help me detect the possible causes of such erratic behaviour of the OWB 10g tool?
Thanks & Best Regards,
Harshad Borgaonkar
PwC
Hello Harshad,
2 GB of RAM doesn't seem like very much to me. I guess your bottleneck is I/O; you've got to investigate this (keep an eye on long-running processes). You didn't say very much about your target database design. Do you have a lot of indexes on the target tables, and if so, have you tried dropping them before loading? Do your OWB mappings require a lot of lookups (if so, appropriate indexes on the lookup tables are very useful)? Do you use external tables? Are you talking about loading dimension tables, fact tables, or both? You've got to supply some more information so that we can help you better.
Regards,
Jörg
-
Creative Cloud is taking too much time to load and is not downloading the one-month trial for Photoshop I just paid money for.
Stop the download if it's stalled, and restart it.
-
Bought an iPhone 6 64 GB just 3 days ago; performance is very bad. Couldn't even open the App Store, and it takes too much time to open a web page.
You may need to reset your network settings, making sure the network you're accessing is stable: tap Settings > General > Reset > Reset Network Settings.
If your iPhone still can't connect to the App Store, tap Settings > iTunes & App Store > tap your Apple ID and sign out; later, sign in with your Apple ID.
-
I have updated my iPhone 3 but am unable to start it. It takes too much time on authentication, and then a message appears that authentication failed.
I don't know whether it's jailbroken or otherwise hacked.
It was working properly before I updated the OS through iTunes. After the update, this message occurs:
Authentication failed, please try after a few minutes.
Please help
-
Taking too much time to save a PO with more than 600 line items
Hi
We are trying to save a PO with more than 600 line items, but it is taking too much time: more than 1 hour to save the PO.
Kindly let me know whether there is any restriction on the number of line items in a PO. Please guide.
Regards
Sanjay
Hi,
I suggest you run a trace (transaction ST05) to identify the bottleneck.
You can find some possible reasons in Note 205005 - Performance composite note: Purchase order.
I hope this helps you
Regards
Eduardo
-
During the Unicode conversion, cluster table export taking too much time
Dear All,
During the Unicode conversion, the cluster table export took too much time: approximately 24 hours for 6 tables. Could you please advise how we can reduce the time?
Thanks
Jainnedra
Hello,
Use the latest R3load from the Service Marketplace.
Also refer to
Note 1019362 - Very long run times during SPUMG scans
Regards,
Nitin Salunkhe
-
Delta Sync taking too much time when refreshing tables
Hi,
I am working on Smart Service Manager 3.0, and I have come across a scenario where the delta sync is taking too much time.
It is required that if we update the stock quantity, the stock should be updated instantaneously.
To do this we have to refresh 4 stock tables on every sync so that the updated quantity is reflected on the device.
This is taking a lot of time (3 to 4 minutes), which is highly unacceptable from a user perspective.
Could anyone please suggest something so that only the tables on which an action was carried out get refreshed?
For example, the CTStock table should get refreshed only if I transfer stock, and get updated accordingly,
not in any other scenario, like changing status from Accept to Driving or anything else unrelated to stocks.
Thanks,
Star
Tags edited by: Michael Appleby
-
Taking too much time using BufferedWriter to write to a file
Hi,
I'm using the method extractItems(), given below, to write data to a file. This method is taking too much time to execute when the number of records in the enumeration is 10,000 or above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much more. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
Thanks in advance.
public String extractItems() throws InternalServerException {
    try {
        String extractFileName = getExtractFileName();
        FileWriter fileWriter = new FileWriter(extractFileName);
        BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
        CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
        System.out.println("Before -1");
        CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
        System.out.println("After -1");
        PrintWriter out = new PrintWriter(bufferWrt);
        System.out.println("Before -2");
        TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
        System.out.println("After -2");
        XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
        Enumeration allitems = itemSet.allItems();
        System.out.println("the batch size : " + itemSet.getBatchSize());
        XDForm frm = itemSet.getXDForm();
        XDFormProperty[] props = frm.getXDFormProperties();
        System.out.println("Before -3");
        bufferWrt.newLine();
        long startTime, startTime1, startTime2, startTime3;
        startTime = System.currentTimeMillis();
        System.out.println("time here is--before-while : " + startTime);
        while (allitems.hasMoreElements()) {
            String aRow = "";
            XDItem item = (XDItem) allitems.nextElement();
            for (int i = 0; i < props.length; i++) {
                String value = item.getStringValue(props[i]);
                if (value == null || value.equalsIgnoreCase("null"))
                    value = "";
                if (i == 0)
                    aRow = value;
                else
                    aRow += ("\t" + value);
            }
            startTime1 = System.currentTimeMillis();
            System.out.println("time here is--before-writing to buffer --new: " + startTime1);
            bufferWrt.write(aRow.toCharArray());
            bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
            bufferWrt.newLine();
            startTime2 = System.currentTimeMillis();
            System.out.println("time here is--after-writing to buffer : " + startTime2);
        }
        startTime3 = System.currentTimeMillis();
        System.out.println("time here is--after-while : " + startTime3);
        out.close(); // added by rosmon to check extra time taken for extraction
        bufferWrt.close();
        fileWriter.close();
        System.out.println("After -3");
        return extractFileName;
    } catch (Exception e) {
        e.printStackTrace();
        throw new InternalServerException(e.getMessage());
    }
}
Hi fiontan,
Thanks a lot for the response!!!
Yeah, I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
Does it save any time to use the print() method?
The place where the delay is occurring is the while loop shown below:
while (allitems.hasMoreElements()) {
    String aRow = "";
    XDItem item = (XDItem) allitems.nextElement();
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i == 0)
            aRow = value;
        else
            aRow += ("\t" + value);
    }
    startTime1 = System.currentTimeMillis();
    System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
    bufferWrt.write(aRow.toCharArray());
    out.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.newLine();
    startTime2 = System.currentTimeMillis();
    System.out.println("time here is--after-writing to buffer : " + startTime2);
}
What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that till the records are done.
Please do let me know if you have any idea as to why this is happening! This bug is giving me a scare.
Thanks in advance
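For what it's worth, the two most likely culprits in that loop are the flush() calls after every row, which defeat the whole point of the BufferedWriter, and building each row by repeated String concatenation, which goes quadratic as rows get wide. A sketch of the same loop without either cost, reusing the variables from the original method:
while (allitems.hasMoreElements()) {
    XDItem item = (XDItem) allitems.nextElement();
    StringBuilder aRow = new StringBuilder(); // avoids quadratic String concatenation
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i > 0)
            aRow.append('\t');
        aRow.append(value);
    }
    bufferWrt.write(aRow.toString());
    bufferWrt.newLine();
    // no flush() here: let the BufferedWriter empty its buffer only when it is full
}
bufferWrt.flush(); // flush once, after the loop
If the 20-second pauses persist even without the per-row flushes, the stall is more likely in allitems.nextElement() fetching the next batch from the backing store than in the file writing itself.
-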
PRS-600 e-reader taking too much time to load all the books
Whenever I start my PRS-600 from a complete shutdown, it takes too much time to load all the files present on the memory card. Is anyone else facing the same issue?
Hello,
Recently, I have been having some serious issues with my Sony PRS-600. I am running Windows 7 64-bit and an updated Reader Library 3.3.
The issue comes when transferring books to the e-reader from the library, and from the e-reader to a collection. The problem is that the software becomes intolerably slow while it's processing the command. The Reader Library window grays out, displays "(Not Working)", and if clicked on, shades to a white color and displays either "cancel operation" or "wait until program responds". If I do cancel the operation, the e-reader doesn't appear to follow it and still displays "Do not disconnect". Since I do not see any other way to disconnect (other than the eject option), I remove the USB plug, which causes a few more issues with the reader (such as removing all of my collections, for example!).
But anyway, that's not the main issue here. The main issue is that the book transferring is really slow. I need to wait a couple of minutes (or even more) just for the software to process. Moving just 1 MB of data takes as much time as if it were 1 GB. Sometimes, at random, it's fast, and sometimes the application is better left alone while it's processing the command. If I inspect My Computer, simply loading the e-reader storage icons and information makes Windows Explorer "crash" (e.g., close all windows and then reopen them). In all this randomness, it just happens that even creating a collection makes the software slow.
So to recap: the Reader software is slow when adding and moving books.
I hope someone will help me resolve this annoyance.
Thank you,
KQ
Maybe you are looking for
-
My phone won't finish downloading updates to iOS 7 without being plugged into my computer and completing via iTunes. It's been this way with 7.0.2, 7.0.3 and now 7.0.4, and it doesn't seem to matter whether it's on cellular data or WiFi. Does anyone know
-
Databases, schemas, tables?
Hello, I am new to Oracle; I have been using SQL Server for some time, and my company has switched. In SQL Server I could create many databases, so I would create a separate database for each project to keep them separate; then I would create one ca
-
Question related to the use of multiple EVDRE's in one sheet
I am trying to create an input schedule/report (an input schedule that has history attached to it for users to reference) which would allow users to input their sales forecasts directly into BPC. However, our challenge is that we have roughly 5,000+ ite
-
Quicktime 10 won't play FCE exported .mov files?
I have edited several videos using Final Cut Express and exported them as .mov files to my desktop. Now, because I have a small internal hard drive (128GB SSD), I copy all of my raw footage to an external hard drive and then delete the files from my
-
Why does a socket policy file have a 20 KB limitation?
When I provide a policy file over 20 KB on my socket server, the Flash client gets security error #2048. And I observe "policyfiles.txt"; there is a log like this: "Error: ignoring the socket policy file at xmlsocket://192.168.5.13:8843 because the file is too large. A socket policy file must not exceed 20 KB." It mean p