Execution time increase

Hi guys,
I'm currently developing a program that runs 24 hours a day, 7 days a week. My problem is that its execution time increases as the days pass. Tracing the execution time, on the first day I start the program it is 0.04 seconds, but after 3 days it increases to 4.5 seconds.
I then gradually tracked down the subVI causing the problem and found that it is the one writing to the INI file.
The subVI's function is to REPLACE the data (20 values) in a file every 10 seconds (not accumulating data, just replacing the old data). I open the file, replace the data, and close the file every iteration, yet as the days pass the execution time of this subVI increases.
Can anyone help me find a solution? Time is a critical factor in my program: the execution time is limited to less than 10 seconds.
thanks
regards
kokula
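For reference, the replace-not-append pattern described above (open, overwrite, close every cycle) can be sketched in Python with the standard configparser module; the file path and key names here are made up for illustration:

```python
import configparser
import os
import tempfile

def write_snapshot(path, values):
    """Replace the whole INI file with the latest values.

    Opening with mode 'w' truncates the file first, so the file size
    stays constant no matter how many times this runs: a true replace,
    never an append.
    """
    config = configparser.ConfigParser()
    config["data"] = {f"ch{i}": str(v) for i, v in enumerate(values)}
    with open(path, "w") as f:   # 'w' truncates; 'with' guarantees close
        config.write(f)

# Demo: write twice; the file holds only the latest snapshot.
path = os.path.join(tempfile.mkdtemp(), "data.ini")
write_snapshot(path, range(20))
size_first = os.path.getsize(path)
write_snapshot(path, range(20))
assert os.path.getsize(path) == size_first   # no growth between writes
```

If a pattern like this still slows down over days, the growth is usually elsewhere (an unclosed reference or a growing array), not in the file write itself.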

As David said, post the code.
Is there any reason why you are logging to an .ini file? Does that mean you are using the LabVIEW Config File functions?
I don't know if deallocating memory would help, or is even possible, without seeing your code. You either have resources that are continuously created but never closed out, or an ever-growing array. A growing array takes more memory over time and slows down the code, because LabVIEW has to move memory around to accommodate the ever-larger array.
I bet that if you let your program run long enough, it would eventually crash from running out of memory.
By the way, going from 47 MB to 350 MB of memory consumption is the same thing as RAM usage increasing.
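The ever-growing-array failure mode is easy to reproduce in a few lines of plain Python (illustrative only, not LabVIEW): any per-cycle operation that touches the whole buffer gets slower as the buffer grows.

```python
import time

def slow_loop(iterations):
    """Append to an ever-growing buffer and scan it every cycle:
    per-iteration work grows with the buffer, so cycle time creeps up."""
    history = []
    cycle_times = []
    for i in range(iterations):
        t0 = time.perf_counter()
        history.append(i)      # buffer is never cleared, so it grows forever
        _ = sum(history)       # any whole-buffer operation gets slower
        cycle_times.append(time.perf_counter() - t0)
    return cycle_times

times = slow_loop(5000)
print(f"first 500 cycles: {sum(times[:500]):.4f}s, "
      f"last 500 cycles: {sum(times[-500:]):.4f}s")
```

The fix is the same in any language: keep the per-cycle working set bounded (replace elements in place, or use a fixed-size buffer) instead of letting it accumulate.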

Similar Messages

  • Why does the execution time increase with a while loop, but not with "Run continuously"?

    Hi all,
    I have a serious timing problem that I don't know how to solve, because I don't know exactly where it comes from.
    I control two RF switches via a DAQ card (NI USB-6008). Only one position at a time can be selected on each switch. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs and then activates the desired ones. It has three inputs: two simple string controls, and an array of clusters that contains the list of all the outputs plus some information about what is connected (specific to my application).
    I use this VI in a complex application, and I have problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this special VI I record the execution time to a CSV file for later analysis in Excel.
    After several tests, I found that if I run this test VI with a while loop, the execution time increases on each cycle, but if I remove the while loop and use the "Run continuously" functionality, the execution time remains the same. My top-level application has while loops and events, so its execution time increases too.
    Could someone explain to me why the execution time increases, and how I can avoid it? I attached my test VI and the necessary subVIs, as well as a picture of a graph showing the execution time with a while loop and with "Run continuously".
    Thanks a lot for your help!
    Attachments:
    TimeTesting.zip ‏70 KB
    Graph.PNG ‏20 KB

    jul7290 wrote:
    Thank you very much for your help! I added the "Clear task" vi and now it works properly.
    If you are still using Run Continuously, you should stop. That is meant strictly for debugging; in fact, I can't even tell you the last time I used it. If you want your code to repeat, you should use loops and control the behavior of the code.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • CVI XML Functions Execution Times Increase When Looped

    I have written multiple functions in CVI that read XML files. I have confirmed in the Resource Tracking utility that I clean up all of my lists, elements, documents, etc. Still, when I loop any of the functions I have created, the execution times increase. The increase is small, but it is noticeable and does affect my execution.
    Are there any other sources of memory that I need to deallocate? It seems there is a memory leak somewhere, but I am unable to see where.
    I am currently running LabWindows/CVI 2009 on Windows 2008 Server, looping my functions using TestStand 4.2.1. Any help would be appreciated!
    Thanks in advance,
    Kyle

    Hi Daniel,
    Thanks for the quick response.
    It is indeed a slowdown in execution speed when we loop. When looped, the XML reader overwrites variables; it does not add to an array. Our application is structured differently from my test case: we run a CVI function from TestStand that contains a series of commands, including the XML reading, so the XML looping is really done in CVI. I used TestStand in my test case just to get execution times. The pseudocode for the CVI function is as follows:
    For loop (looping over values, like amplitude or frequency)
        Read the XML
        Apply the data from the XML to set up some instrument(s)
        Do something...
    End loop
    I can confirm that the instrument set up is not the cause of the slow down. We have written the same XML reading in C# and applied the values to the instrument setup and do not experience the slow down.
    I tested with On-The-Fly Reporting enabled and the execution time continued to slow down.
    I hope that answers all of your questions!
    Thanks,
    Kyle
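When the leaking resource isn't obvious, logging each iteration's elapsed time makes the creep visible and measurable. Here is a small, generic Python harness for that (leaky_read is a hypothetical stand-in for the looped XML-reading call, not CVI code):

```python
import time

def profile_iterations(fn, n):
    """Run fn() n times and return each call's elapsed time in seconds.
    Comparing early and late timings shows whether the cost is creeping up."""
    timings = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - t0)
    return timings

# Hypothetical stand-in for the XML-reading call: it keeps state that is
# never released, so every call does a little more work than the last.
payload = []
def leaky_read():
    payload.extend(range(50))
    list(payload)   # work proportional to the accumulated state

t = profile_iterations(leaky_read, 1000)
print(f"first call: {t[0]:.6f}s  last call: {t[-1]:.6f}s")
```

Plotting or binning the returned timings shows whether growth is linear (a steadily growing structure) or stepwise (a periodic reallocation or flush).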

  • Want to reduce ETL execution time

    Hi everybody,
    I am working with OWB 10g R2.
    Environment: Windows 2003 Server, 64-bit Itanium; Oracle 10g database on a NetApp server, mapped as drive I: on the 186 server where OWB is installed.
    Source: Oracle staging schema.
    Target: Oracle target schema.
    Problem: a month ago our ETL process took 2 hours to complete; now it takes 5 hours, and I don't know why.
    Can anybody suggest what I need to check in OWB for optimization?

    Thanks for the reply, sir.
    As you suggested, I ran the query to check execution times in descending order. Here is a little of the output for today's execution.
    MAP_NAME                        START_TIME  END_TIME   ELAPSE_TIME  ERRORS  LOGICAL_ERRORS  SELECTED  INSERTED  UPDATED  DELETED  DISCARDED  MERGED
    "M_CONTRACT_SUMMARY_M2__V_1"    20-NOV-07   20-NOV-07  1056         0       0               346150    0         346052   0        0          0
    "M_POLICY_SUSPENCE_V_1"         20-NOV-07   20-NOV-07  884          0       0               246576    0         0        0        0          246576
    "M_ACTIVITY_AMT_DETAIL_M3_V_1"  20-NOV-07   20-NOV-07  615          0       0               13927     13927     0        0        0          0
    ==================================
    I think the elapsed time depends on the number of records selected and inserted or merged: if the record counts drop, the time drops too. But compared to before (when the ETL finished within 2 hours), we now see more than a 100-second difference between then and now.
    Source tables are analyzed daily before mapping execution starts, and target tables are analyzed in the evening.
    As far as I remember, no major changes have been made to the ETL mappings recently. One day there was a problem with the source location for another process, Wonders (as I said before, there are 3 main processes, Sun, Wonders and Life_asia, of which Sun and Wonders are scheduled), so we corrected that location and redeployed all the mappings as the Control Center required.
    The mappings then ran fine, but the execution time increased by more than an hour (5+ hrs) over the previous 3-4 hrs.
    The normal times were:
    2 hrs for Life_asia
    30 min for Wonders
    15 min for Sun
    Can you suggest a temporary or permanent solution to this problem?
    Our system configuration:
    1 TB HDD, of which 200-300 GB is free
    4 GB RAM
    64-bit Windows OS
    Temp tablespace 99% used, auto-extendable
    Target tablespace 93-95% used
    Data is loaded incrementally every day.
    The load window was 5 am to 8 am, but nowadays it runs until 12:30 pm, after which the materialized views refresh, and after that the reports and cubes refresh. So the whole chain is delayed, and this is a live process.
    Let me know if you need any other information. Regarding hardware, do we need to increase anything, such as RAM or other memory?
    Waiting for your reply.

  • What to do if data grows heavily: how to decrease the execution time

    The execution time of a procedure has increased from 1 minute to 40 minutes over the past three years. I think this is due to the increase in the size of the tables. We have been maintaining the database for the last 6 years; nearly 25,000 records are added per month to one table, and each table now contains over 12 lakh (1.2 million) records. I have already applied query optimization techniques, so please suggest what I should do.

    So, does this process need to access the whole table, or is it just interested in the last month's records?
    "I already used Query Optimization techniques." Meaning what, precisely?
    "So please suggest me what to do." Without knowing a whole lot more about your situation we cannot give you a solution. You have one slow process; so what? For instance, let's presume this is a batch process that runs once a month. Does a forty-minute elapsed time matter compared to the processes that are used by all your users many times every day? Are you prepared to risk the performance of those OLTP functions to improve the runtime of that batch process?
    Cheers, APC

  • Execution time continues to increase while vi is running

    I started with a VI that reads data from a Wii remote (wiimote). The code works. I want to convert the acceleration data to velocity and displacement by numerical integration, and accurate conversion requires a measure of execution time. The VI correctly returns 9-11 ms when it first starts; however, the measured intervals continue to increase in length as the VI runs, going from 10 ms to 80 ms after about 1 hour. I used a tick counter to measure the loop execution time. Any suggestions?
    Attachments:
    Simple Event Callback Roll Pitch. timed eventvi.vi ‏19 KB

    Can you do some profiling to see which subVI could be the problem?
    If you look at the Task Manager, how does the memory use evolve over time?
    Your timing scheme also seems flawed, because the execution of the first tick count is poorly defined with respect to the rest of the code.
    Some typical things to look out for:
    growing arrays in shift registers
    constantly opening references without ever closing them.
    LabVIEW Champion. Do more with less code and in less time.
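The second pattern, constantly opening references without closing them, can be illustrated outside LabVIEW too. Here is a hedged Python sketch using a toy RefServer class (invented for this example) in place of a real driver:

```python
class RefServer:
    """Toy stand-in (invented for this example) for a driver or server
    that hands out references and tracks every one still open."""
    def __init__(self):
        self.open_refs = []
    def open_ref(self):
        ref = object()
        self.open_refs.append(ref)   # server bookkeeping per open reference
        return ref
    def close_ref(self, ref):
        self.open_refs.remove(ref)

# Leaky loop: a new reference is opened every iteration and never closed,
# so the server's bookkeeping grows without bound and lookups slow down.
leaky = RefServer()
for _ in range(1000):
    leaky.open_ref()
print(len(leaky.open_refs))    # grows to 1000

# Fixed loop: open once before the loop, reuse it, close once after.
fixed = RefServer()
ref = fixed.open_ref()
for _ in range(1000):
    pass                       # reuse the same reference each cycle
fixed.close_ref(ref)
print(len(fixed.open_refs))    # back to 0
```

In LabVIEW terms: move the Open outside the loop, wire the same reference through a shift register, and Close after the loop.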

  • Oracle NoSQL YCSB - continuously increasing execution time

    Greetings,
    Currently I am testing several NoSQL databases using YCSB. I am new to this type of database, but I have already tested a few of them. I am using a VM with 2 GB RAM, hosted on Windows 7. Even though it is not recommended, since I am working in a low-capacity environment I am using KVLite. My problem is confusing and I cannot find the reason for it. I have successfully loaded data and tested Oracle NoSQL using different workloads; however, with each execution I get a higher execution time. For example, if the first execution takes 20 seconds, then if I shut the database down and execute the same workload again the next day, it takes 35 seconds, and so on.
    Do you have any idea of what may be causing that? Like I said, I have been researching some nosql databases but I have never had that strange results.
    Regards.

    To add to Robert's comment, the NoSQL DB documentation on KVLite states the following:
         KVLite is a simplified version of Oracle NoSQL Database. It provides a single-node store
         that is not replicated. It runs in a single process without requiring any administrative interface.
         You configure, start, and stop KVLite using a command line interface.
         KVLite is intended for use by application developers who need to unit test their Oracle NoSQL
         Database application. It is not intended for production deployment, or for performance measurements.
    Per the documentation, you can use KVLite to test out the API calls in your benchmarking application, but you should not use it to perform the actual performance testing. For performance testing, please install and use the Oracle NoSQL Database server.

  • Execution Time Issue

    Help Please!!!
    I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
    Collects analog data from a cDAQ chassis with a 9205 at 5kHz
    Data is collected in 100ms chunks
    Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
    Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
    Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that shows the troubling execution time; all the other timing checks show 0 ms every time the issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
    Where else can I look for where the time went? It doesn't seem to make sense.
    Thanks for reading!
    Tim
    Attachments:
    Screen Shot 2013-09-06 at 9.32.46 AM.png ‏87 KB

    You should probably increase how much data you write with a single Write to Text File. Move the Write to Text File out of the FOR loop, and just have the data to be written auto-index to create an array of strings. Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
    Another idea is to use a separate loop (yes, with another queue as well) for the writing of the file, with the Dequeue Element inside a further WHILE loop. On the first iteration of this inner loop, set the timeout to something normal, or -1 to wait forever; every further iteration should have a timeout of 0. You do this with a shift register. Auto-index the read strings out of the loop; this array goes straight into Write to Text File. This way you can quickly catch up after a file write that takes a long time.
    NOTE: This is just a very quick example I put together. It is far from a complete idea, but it shows the general approach I had in mind for reading the queue.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Write all data on queue.png ‏16 KB
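The queue-draining pattern described above translates directly to other languages. A hedged Python sketch using the standard queue module (block for the first element, take the rest without waiting, then write the whole batch in one operation; the sample strings are invented):

```python
import queue

def drain(q):
    """Take everything currently on the queue as one batch: wait briefly
    for the first element, then keep taking elements with no wait at all
    (the 'timeout -1 first, timeout 0 afterwards' pattern)."""
    items = []
    try:
        items.append(q.get(timeout=1.0))   # first dequeue: wait for data
        while True:
            items.append(q.get_nowait())   # further dequeues: timeout 0
    except queue.Empty:
        pass                               # queue emptied: batch complete
    return items

q = queue.Queue()
for line in ("sample 1", "sample 2", "sample 3"):
    q.put(line)

batch = drain(q)
# One file write per batch instead of one write per element:
text = "\n".join(batch) + "\n"
print(repr(text))
```

Because the producer keeps enqueuing while a slow write is in progress, the next drain simply returns a bigger batch, so the writer catches up instead of falling further behind.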

  • Different execution times for back ground jobs - why?

    We have a few jobs scheduled, and each time they run we see different execution times: sometimes the time increases, and sometimes it decreases steeply. What could be the reasons?
    Note:
    1. We have the same load of jobs on the system in all the cases.
    2. We haven't changed any settings at system level.
    3. Data is more or less at the same range in all the executions.
    4. We mainly run these jobs
    Thanks,
    Kiran
    Edited by: kiran dasari on Mar 29, 2010 7:12 PM

    Thank you Sandra.
    We have no RFC calls or other instances; ours is a very simple system. We have two monster jobs: the first one for HR data, and the second an extract program that updates a Z table for BW loads.
    Our Basis and admin teams confirmed that there are no network issues.
    Note: We are executing these jobs over the weekend nights.
    Thanks,
    Kiran

  • Reduce execution time

    How can I reduce the execution time of this code?
    LOOP AT porder1.
      SELECT aufnr bstmg hsdat sgtxt bwart charg
        FROM mseg
        INTO (porder-aufnr, porder-bstmg, porder-hsdat,
              porder-sgtxt, porder-bwart, porder-charg)
        WHERE matnr = porder1-matnr
          AND aufnr = porder1-aufnr
          AND werks = porder1-pwerk
          AND ( bwart = '101' OR bwart = '102' ).
      ENDSELECT.
    ENDLOOP.
    Regards
    Praju .
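The usual fix for a per-row SELECT inside a loop is one set-based fetch keyed by the loop values (in ABAP, typically a single SELECT with FOR ALL ENTRIES into an internal table, then a READ per loop row). The idea can be sketched language-neutrally in Python with in-memory stand-in tables; all field values below are invented for illustration:

```python
# In-memory stand-ins for the two tables (illustrative data only).
porder1 = [{"matnr": "M1", "aufnr": "A1", "pwerk": "W1"},
           {"matnr": "M2", "aufnr": "A2", "pwerk": "W1"}]
mseg = [{"matnr": "M1", "aufnr": "A1", "werks": "W1", "bwart": "101", "bstmg": 5},
        {"matnr": "M2", "aufnr": "A2", "werks": "W1", "bwart": "102", "bstmg": 7},
        {"matnr": "M2", "aufnr": "A2", "werks": "W1", "bwart": "261", "bstmg": 9}]

# Set-based version: one pass over mseg grouped by key, instead of one
# query per porder1 row (the N+1 pattern in the original code).
index = {}
for row in mseg:
    if row["bwart"] in ("101", "102"):
        key = (row["matnr"], row["aufnr"], row["werks"])
        index.setdefault(key, []).append(row)

result = []
for p in porder1:
    result.extend(index.get((p["matnr"], p["aufnr"], p["pwerk"]), []))

print(len(result))   # 2: the bwart 261 row is filtered out
```

One table scan plus cheap dictionary lookups replaces one database round trip per loop row, which is where most of the time usually goes.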

    Hi Prajwal,
    I would like to suggest: it is possible to reduce the execution time by increasing the number of fields in the WHERE clause, since only the specific records needed are then fetched, which results in comparatively less execution time.
    Also, SAP provides powerful analysis tools such as transactions ST05, ST07, ST30, and many more.
    I would like to suggest a couple of references relating to your case:
    SDN: Long execution time during processing of a select query (/thread/477540, original link is broken)
    SDN: Reducing the execution time of a program using tools ("How can i reduce time of execution")
    SDN: Solutions to reduce the execution time of a program ("How to reduce my query execution time?")
    Hope that's useful.
    Good Luck & Regards.
    Harsh Dave

  • Query Execution Time for a Query causing ORA-1555

    dear Gurus
    I am getting the ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased undo_retention, since I did not find the UNDOBLKS column of v$undostat high at the time the ORA-01555 occurred. But a new ORA-01555 is now occurring whose query duration exceeds the undo retention time.
    My question:
    1. Is it possible to accurately find the query duration other than from the alert log file?

    Abhishek, since you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful against ORA-01555 under manual rollback segment management.
    1. Tune the query. The faster a query runs, the less likely a 1555 is to occur.
    2. Look at the processing. If a process reads and updates the same table while committing frequently, then under manual rollback segment management the process would basically create its own 1555 error, rather than just being the victim of another process changing data and the rollback data being overlaid while the long-running query was still running. With undo management, the process can generate more change data than can be held for the undo_retention period; because that data is committed, Oracle has been told it doesn't really need it for rolling back a current transaction, so it gets discarded to make room for new changes.
    If you find item 2 is true, then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that holds the keys of the rows to be updated or deleted, and then using that driver to control access to the target table.
    3. If the cause of the 1555 is, or may be, delayed block cleanout, then SELECT * FROM the target table prior to running the long-running query.
    Realistically, you might need to increase the size of the undo tablespace to hold all the change data, and set undo_retention longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
    HTH -- Mark D Powell --
    Dear Mark,
    Thanks for the excellent advice. I found that the error is coming from frequent commits, which is item 2, as you rightly mentioned.
    I think I need to keep a watch on the running queries. I was just trying to find the execution time for the queries, in case there is any way to find the query duration without running a trace.
    regards
    abhishek

  • Execution times and other issues in changing TextEdit documents

    (HNY -- back to terrorize everyone with off-the-wall questions)
    --This relates to other questions I've asked recently, but that may be neither here nor there.
    --Basically, I want to change a specific character in an open TextEdit document, in text that can be pretty lengthy. But I don't want pre-existing formatting of the text to change.
    --For test purposes the front TextEdit document is simply an unbroken string of letters (say, all "q's") ranging in number from 5 and upwards. Following are some of the results I've gotten:
    --1) Using a do shell script routine (below), the execution is very fast, well under 0.1 second for changing 250 q's to e's and changing the front document. The problem is that the formatting of the first character becomes the formatting for the entire string (in fact, for the entire document if there is subsequent text). So that doesn't meet my needs, although I certainly like the speed.
    --SCRIPT 1
    tell application "TextEdit"
    set T to text of front document
    set Tnew to do shell script "echo " & quoted form of T & " | sed 's/q/e/g'"
    set text of front document to Tnew
    end tell
    --END SCRIPT 1
    --The only practical way I've found to change a character AND maintain formatting is the "set every character where it is "q" to "e"" routine (below). But, for long text, I've run into a serious execution speed problem. For example, if the string consists of 10 q's, the script executes in about 0.03 second. If the string is 40 characters, the execution is 0.14 second, a roughly linear increase. If the string is 100 characters, the execution is 2.00 seconds, which doesn't correlate to a linear increase at all. And if the string is 250 characters, I'm looking at 70 seconds. At some point, increasing the number of string characters leads to a timeout or stall. One interesting aspect of this is that, if only the last 4 characters (example) of the 250-character string are "q", then the execution time is again very quick.
    --SCRIPT 2
    tell application "TextEdit"
    set T to text of front document
    tell text of front document
    set every character where it is "q" to "e"
    end tell
    end tell
    --END SCRIPT 2
    --Any insight into this issue (or workaround) will be appreciated.
    --In the real world, I most often encounter the issue when trying to deal with spaces in long text, which can be numerous.

    OK, Camelot, helpful but maddening. Based on your response. I elected to look at this some more, even though I'm stuck with TextEdit on this project. Here's what I found, not necessarily in the order I did things:
    1) I ran your "repeat" script on my usual machine (2.7 GHz PPC with 10.4.6) and was surprised to consistently get about 4.25 seconds. I didn't think it should matter, but I happened to run it with Script Debugger.
    2) Then, curious about what a slower processor would do, I ran it at ancient-history speed: a 7500 souped up to 700 MHz. On a 10.4.6 partition, the execution time was about 17 seconds, but on a 10.3.6 partition it was only about 9.5 seconds. (The other complication with this older machine is that it uses XPostFacto to accommodate OS X.) And I don't have Script Debugger for 10.3.x, so I ran the script in Script Editor on that partition.
    3) That got me wondering about Script Editor vs. Script Debugger, so (using 10.4.6) I ran the script on both the old machine and my fast usual machine using Script Editor. On the old machine it was somewhat faster, at about 14 seconds. But, surprise!, on the current machine it took twice as long, at 8.6 seconds. The story doesn't end here.
    (BTW, I added a "ticks" routine to the script, so the method of measuring time should be consistent. And I've been copying and pasting the script to the various editors, so there shouldn't be any inconsistencies with it. I've consistently used a 250-character unbroken string of the target in a TextEdit document.)
    4) Mixed in with all these trials, I wrote a script to get a list of offsets of all the target characters; it can be configured to change the characters or not. But I found some intriguing SE vs. SD differences there also. In tests on the fast machine, running the script simply to get the offset list (without making any changes), the list is generated in under a second -- but sometimes barely so. The surprise was that SE ran it in about half the time as SD. although SD was about twice as fast with a script that called for changes. Go figure.
    5) Since getting the offset list is pretty fast in either case, I was hoping to think up some innovative way of using the offset list to make changes in the document more quickly. But running a repeat routine with the list simply isn't innovative, and the result is roughly what I get with your repeat script coupled with an added fraction of a second for generating the list. Changing each character as each offset is generated also yields about the same result.
    My conclusion from all this is that the very fast approaches (which lose formatting) are changing the characters globally, not one at a time as occurs visibly with techniques where the formatting isn't lost. I don't know what to make of SE vs. SD, but I repeated the runs several times in each editor, with consistent results.
    Finally, while writing the offset list script, I encountered a couple AS issues that I've seen several times in the past (having nothing specifically to do with this topic), but I'll present that as a new post.
    Thanks for your comments and any others will be welcome.
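The contrast observed above (fast global replace that loses formatting vs. slow per-character edits that keep it) reflects a general complexity difference: editing one occurrence at a time copies the whole text on every edit, giving quadratic total work. A Python sketch of the same asymmetry (not AppleScript, but the same asymptotics):

```python
import time

text = "q" * 5000

# Global replace: a single pass over the string.
t0 = time.perf_counter()
fast = text.replace("q", "e")
t_fast = time.perf_counter() - t0

# One edit per occurrence: each edit rebuilds the whole string,
# which is roughly how the per-character TextEdit loop behaves.
t0 = time.perf_counter()
slow = text
i = slow.find("q")
while i != -1:
    slow = slow[:i] + "e" + slow[i + 1:]   # full copy on every edit
    i = slow.find("q", i + 1)
t_slow = time.perf_counter() - t0

assert fast == slow == "e" * 5000
print(f"global: {t_fast:.6f}s  per-character: {t_slow:.6f}s")
```

This also explains the observation that a string whose only targets are its last few characters is fast again: the number of edits, not the string length alone, drives the cost.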

  • Execution Time of Mapping

    Hi All,
    We are executing a mapping multiple times through Control Center on the local server.
    The first execution of the mapping takes less time than the second and third executions of the same mapping. Why is that?
    Thanks in Advance...

    Hi,
    The mapping execution time depends on a lot of database objects, and in a Dev environment there is little control, so the time may increase. Some of the factors: table availability (a table may be in use by another mapping, putting your query on a wait), the number of processes running (your process may have to wait for some other process to complete), multiple people using the tables, tables not being analyzed, etc. So Dev running times may not be entirely representative. If there is a difference in times in a more controlled environment, then we can analyze the reason.
    Regards
    Bharadwaj Hari

  • SAP BO Report Execution time and Record count

    Hi All,
    We have a requirement to set the limits on report execution time and record count centrally. Can you please help me to identify where exactly we have to define the settings in CMC for BO4.
    Thanks in advance,
    Shalini

    Hi Shalini,
    Please follow these steps, and check for any further details along the way:
    Step 1: Launch CMC
    Step 2: Select Servers
    Step 3: Select Web Intelligence processing Server, right click and Goto Properties
    Step 4: Maximum List Of Values Size (entries) default value is 50000.
    Step 5: Increase this value if your "LOVS" greater than this value.
    Step 6: Save and close.
    Step 7: Restart the server.
    Hope this helps.
    - Ram

  • CREATE TABLE/INDEX execution times

    Hello,
    What are my options for optimizing table and/or index creation times?
    I have a script that creates around 60,000 objects (maybe half indexes, half tables), and while each operation takes no more than a second, this results in a 17-hour execution time. So I'm looking for ways to shave a fraction of a second off each table creation.
    What I can think of is that all of these operations end up writing to the same datafile (e.g. SYSTEM01.DBF); could it do any good to divide the system tablespace into more datafiles? Adding a datafile would only increase the quota, so I would have to regroup the data?
    Or can I increase the number of redo logs, or temporarily add larger redo log files?
    Here is an extract to demonstrate:
    14:20:10 SQL> DROP TABLE PS_DOTL_PDS2_T2
    14:20:10   2  /
    Table dropped.
    14:20:11 SQL> CREATE TABLE PS_DOTL_PDS2_T2 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
    14:20:11   2           BUSINESS_UNIT VARCHAR2(5) NOT NULL,
    14:20:11   3           PO_ID VARCHAR2(10) NOT NULL,
    14:20:11   4           LINE_NBR INTEGER NOT NULL,
    14:20:11   5           SCHED_NBR SMALLINT NOT NULL,
    14:20:11   6           DISTRIB_LINE_NUM INTEGER NOT NULL,
    14:20:11   7           BUSINESS_UNIT_REQ VARCHAR2(5) NOT NULL,
    14:20:11   8           REQ_ID VARCHAR2(10) NOT NULL,
    14:20:11   9           REQ_LINE_NBR INTEGER NOT NULL,
    14:20:11  10           REQ_SCHED_NBR SMALLINT NOT NULL,
    14:20:11  11           REQ_DISTRIB_NBR INTEGER NOT NULL,
    14:20:11  12           ACCOUNT VARCHAR2(10) NOT NULL,
    14:20:11  13           OPERATING_UNIT VARCHAR2(8) NOT NULL,
    14:20:11  14           PRODUCT VARCHAR2(6) NOT NULL,
    14:20:11  15           FUND_CODE VARCHAR2(5) NOT NULL,
    14:20:11  16           CLASS_FLD VARCHAR2(5) NOT NULL,
    14:20:11  17           PROGRAM_CODE VARCHAR2(5) NOT NULL,
    14:20:11  18           BUDGET_REF VARCHAR2(8) NOT NULL,
    14:20:11  19           AFFILIATE VARCHAR2(5) NOT NULL,
    14:20:11  20           AFFILIATE_INTRA1 VARCHAR2(10) NOT NULL,
    14:20:11  21           AFFILIATE_INTRA2 VARCHAR2(10) NOT NULL,
    14:20:11  22           CHARTFIELD1 VARCHAR2(10) NOT NULL,
    14:20:11  23           CHARTFIELD2 VARCHAR2(10) NOT NULL,
    14:20:11  24           CHARTFIELD3 VARCHAR2(10) NOT NULL,
    14:20:11  25           PROJECT_ID VARCHAR2(15) NOT NULL,
    14:20:11  26           ALTACCT VARCHAR2(10) NOT NULL,
    14:20:11  27           DEPTID VARCHAR2(10) NOT NULL,
    14:20:11  28           MONETARY_AMOUNT DECIMAL(26, 3) NOT NULL,
    14:20:11  29           DISTRIB_AMT DECIMAL(26, 3) NOT NULL,
    14:20:11  30           PO_DT DATE,
    14:20:11  31           CURRENCY_CD VARCHAR2(3) NOT NULL,
    14:20:11  32           KK_CLOSE_PRIOR VARCHAR2(1) NOT NULL,
    14:20:11  33           PO_STATUS VARCHAR2(2) NOT NULL,
    14:20:11  34           MID_ROLL_STATUS VARCHAR2(1) NOT NULL,
    14:20:11  35           CURRENCY_CD_BASE VARCHAR2(3) NOT NULL,
    14:20:11  36           RT_TYPE VARCHAR2(5) NOT NULL) TABLESPACE PSAPP STORAGE (INITIAL
    14:20:11  37   40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10
    14:20:11  38   PCTUSED 80
    14:20:11  39  /
    Table created.
    14:20:11 SQL> CREATE UNIQUE  iNDEX PS_DOTL_PDS2_T2 ON PS_DOTL_PDS2_T2
    14:20:11   2   (PROCESS_INSTANCE,
    14:20:11   3           BUSINESS_UNIT,
    14:20:11   4           PO_ID,
    14:20:11   5           LINE_NBR,
    14:20:11   6           SCHED_NBR,
    14:20:11   7           DISTRIB_LINE_NUM,
    14:20:11   8           BUSINESS_UNIT_REQ,
    14:20:11   9           REQ_ID,
    14:20:11  10           REQ_LINE_NBR,
    14:20:11  11           REQ_SCHED_NBR,
    14:20:11  12           REQ_DISTRIB_NBR) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT
    14:20:11  13   100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL
    14:20:11  14   NOLOGGING
    14:20:11  15  /
    Index created.
    14:20:11 SQL> ALTER INDEX PS_DOTL_PDS2_T2 NOPARALLEL LOGGING
    14:20:11   2  /
    Index altered.
    14:20:11 SQL> DROP TABLE PS_DOTL_PDS2_T3
    14:20:11   2  /
    Table dropped.
    14:20:12 SQL> CREATE TABLE PS_DOTL_PDS2_T3 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
    14:20:12   2           BUSINESS_UNIT VARCHAR2(5) NOT NULL,
    14:20:12   3           PO_ID VARCHAR2(10) NOT NULL,
    14:20:12   4           LINE_NBR INTEGER NOT NULL,
    14:20:12   5           SCHED_NBR SMALLINT NOT NULL,
    14:20:12   6           DISTRIB_LINE_NUM INTEGER NOT NULL,
    14:20:12   7           BUSINESS_UNIT_REQ VARCHAR2(5) NOT NULL,
    14:20:12   8           REQ_ID VARCHAR2(10) NOT NULL,
    14:20:12   9           REQ_LINE_NBR INTEGER NOT NULL,
    14:20:12  10           REQ_SCHED_NBR SMALLINT NOT NULL,
    14:20:12  11           REQ_DISTRIB_NBR INTEGER NOT NULL,
    14:20:12  12           ACCOUNT VARCHAR2(10) NOT NULL,
    14:20:12  13           OPERATING_UNIT VARCHAR2(8) NOT NULL,
    14:20:12  14           PRODUCT VARCHAR2(6) NOT NULL,
    14:20:12  15           FUND_CODE VARCHAR2(5) NOT NULL,
    14:20:12  16           CLASS_FLD VARCHAR2(5) NOT NULL,
    14:20:12  17           PROGRAM_CODE VARCHAR2(5) NOT NULL,
    14:20:12  18           BUDGET_REF VARCHAR2(8) NOT NULL,
    14:20:12  19           AFFILIATE VARCHAR2(5) NOT NULL,
    14:20:12  20           AFFILIATE_INTRA1 VARCHAR2(10) NOT NULL,
    14:20:12  21           AFFILIATE_INTRA2 VARCHAR2(10) NOT NULL,
    14:20:12  22           CHARTFIELD1 VARCHAR2(10) NOT NULL,
    14:20:12  23           CHARTFIELD2 VARCHAR2(10) NOT NULL,
    14:20:12  24           CHARTFIELD3 VARCHAR2(10) NOT NULL,
    14:20:12  25           PROJECT_ID VARCHAR2(15) NOT NULL,
    14:20:12  26           ALTACCT VARCHAR2(10) NOT NULL,
    14:20:12  27           DEPTID VARCHAR2(10) NOT NULL,
    14:20:12  28           MONETARY_AMOUNT DECIMAL(26, 3) NOT NULL,
    14:20:12  29           DISTRIB_AMT DECIMAL(26, 3) NOT NULL,
    14:20:12  30           PO_DT DATE,
    14:20:12  31           CURRENCY_CD VARCHAR2(3) NOT NULL,
    14:20:13  32           KK_CLOSE_PRIOR VARCHAR2(1) NOT NULL,
    14:20:13  33           PO_STATUS VARCHAR2(2) NOT NULL,
    14:20:13  34           MID_ROLL_STATUS VARCHAR2(1) NOT NULL,
    14:20:13  35           CURRENCY_CD_BASE VARCHAR2(3) NOT NULL,
    14:20:13  36           RT_TYPE VARCHAR2(5) NOT NULL) TABLESPACE PSAPP STORAGE (INITIAL
    14:20:13  37   40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10
    14:20:13  38   PCTUSED 80
    14:20:13  39  /
    Table created. It's a PeopleSoft database, during one of the upgrade steps, running on Oracle 11.2.0.3, Windows patchset #17 I believe. (Win2008R2_64)
    As always any input or references are greatly appreciated.
    Best regards.

    Hi,
    See below. You can use Oracle's deferred segment creation option. Oracle will not spend time on extent allocation, which can save you an enormous amount of time overall.
    http://www.oracle-base.com/articles/11g/segment-creation-on-demand-11gr2.php
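    A minimal sketch of what that looks like (the table name here is illustrative, not one of your PS_DOTL tables; the SEGMENT CREATION clause and the deferred_segment_creation parameter are standard 11gR2 syntax):

    ```sql
    -- Enable deferred segment creation for the session (11.2+; it defaults to TRUE on 11.2).
    ALTER SESSION SET deferred_segment_creation = TRUE;

    -- Or request it explicitly per table: no extents are allocated
    -- until the first row is inserted, so the CREATE itself is fast.
    CREATE TABLE ps_example_tmp (
        process_instance NUMBER(10)  NOT NULL,
        business_unit    VARCHAR2(5) NOT NULL
    ) SEGMENT CREATION DEFERRED
      TABLESPACE psapp;
    ```

    Since your upgrade step drops and recreates these work tables repeatedly, skipping the up-front extent allocation is exactly where the time would be saved.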
    > What I can think of is that all of these operations end up writing to the same datafile (e.g. SYSTEM01.DBF). Could it do any good to divide the system tablespace into more data files? Adding a datafile would only increase the quota, so I would have to regroup the data?

    Why are you giving SYSTEM01.DBF as the example? You should not be using the system tablespace for application objects. Having multiple datafiles will not help you here.
    What do you mean by "regrouping" the data?
    Salman
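    One way to confirm where these work tables actually land is the standard DBA_SEGMENTS dictionary view (a sketch; the LIKE pattern matches the table names from the log above):

    ```sql
    -- List the tablespace each PS_DOTL work table and index was created in.
    SELECT segment_name,
           segment_type,
           tablespace_name
      FROM dba_segments
     WHERE segment_name LIKE 'PS_DOTL_PDS2%'
     ORDER BY segment_name;
    ```

    If everything shows up in PSAPP/PSINDEX as the DDL specifies, the system tablespace and its datafiles are not a factor in the timing.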
