How to reduce project load time?
I am currently working on a large project (25+ MB, thousands of VIs) with on the order of 100 classes. Opening the project is painfully slow; it usually takes 7-8 minutes. I'd like to figure out a way to reduce the load time.
What is LabVIEW's behavior when loading project file VIs and libraries (lvlib, lvclass, etc.)?
Are all items in the project automatically loaded when the project is opened?
Are items not in the project but used as subVIs automatically loaded?
Are dynamically linked items not included in the project automatically opened? (I imagine not.)
Are dynamically linked items that are included in the project automatically opened?
Thanks,
Dave
Hello,
I have talked to multiple people about your issue, and we believe that what you are experiencing is expected behavior given the size of the project and the number of VIs it uses. Creating a new project will sometimes help when a project file has a corruption within it, but since you are experiencing the same behavior on multiple computers, any difference in load time will be a function of the speed, RAM, etc. of each particular computer as compared to your machine.

To clarify what "loaded" means here: when I say the files are technically loaded, I mean that their locations are known, so that when they are called they can be loaded into memory as needed. LabVIEW does not have to load any portion of a VI other than its location on disk. You can think of the project file as an XML file with different pieces of information about the files; files added to the project are recorded by name and location on disk. When the project is loaded, LabVIEW first looks for the files listed in the project file. If a file cannot be found, it tries to dynamically link the file by searching through a series of locations on disk where it might be located.

I would not expect either of those methods to reduce your project load time; switching to dynamic linking only changes the order in which LabVIEW searches for the files. I hope this helps.
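The model described above can be sketched in hypothetical Python: the project file is treated as an XML listing of names and on-disk locations, so "opening" the project resolves locations cheaply while file contents are read only on demand. The XML layout, item names, and paths here are invented for illustration and are not the real LabVIEW project schema.

```python
import xml.etree.ElementTree as ET

# Invented project-file layout; "opening" resolves names to paths only.
PROJECT_XML = """
<Project>
  <Item Name="main.vi" Path="src/main.vi"/>
  <Item Name="util.lvlib" Path="libs/util.lvlib"/>
</Project>
"""

def open_project(xml_text):
    """Parse the project file and record each item's location (cheap)."""
    root = ET.fromstring(xml_text)
    return {item.get("Name"): item.get("Path") for item in root.iter("Item")}

def load_item(index, name):
    """Only now would the file's contents actually be read from disk."""
    return f"<contents of {index[name]}>"   # placeholder for a real read

index = open_project(PROJECT_XML)    # fast: just names and paths
body = load_item(index, "main.vi")   # deferred until the item is called
```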
-Zach
Certified LabVIEW Developer
Similar Messages
-
Hi All,
In my page i have 2 portlets...
The first portlet takes 10 seconds to render and the second takes 8 seconds, so the page takes more than 18 seconds to render in total. I want to render these portlets in <= 12 seconds...
Please give any suggestions..
Thanks in advance...

Try using AJAX to load the portlets.
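The suggestion can be illustrated with a small Python asyncio sketch (portlet names and the scaled-down render times are made up): rendering both portlets concurrently bounds page time by the slower portlet, not by the sum of the two.

```python
import asyncio

# Sleeps stand in for portlet render work, scaled down by 100x.
async def render(name, seconds):
    await asyncio.sleep(seconds)
    return name

async def page():
    # Concurrent: total time ~ max(0.10, 0.08), not 0.10 + 0.08.
    return await asyncio.gather(render("portlet1", 0.10),
                                render("portlet2", 0.08))

rendered = asyncio.run(page())
```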
-
How to reduce smartform activation time
Hi, All
I created a new Smart Form and activated it without filling in any values; it activated within 1 second.
Now when I activate the same Smart Form again, it takes nearly 20-30 minutes.
Does anyone know how to reduce this Smart Form activation time?

Hello,
strange problem.
I think you should contact your Basis colleagues, and it may be helpful to look at this transaction: STATTRACE
Regards
roberto -
How to reduce dso activation time
hi
Can anybody explain how to reduce DSO activation time? I have a DSO with 5 crore records and activation takes 7 to 8 hours.

Hi,
Try this.
T-code RSCUSTA2: here you can change the number of records per package and the number of processes used for the activation job. With this you can speed up your ODS activation; the number of processes can be increased based on the number of processes available on your server (check SM50 or SM51).
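The effect of those two settings can be sketched in Python (names and numbers are illustrative; real ODS activation runs as separate batch work processes, which threads merely stand in for here): records are split into packages, and the packages are activated by a bounded pool of workers.

```python
from concurrent.futures import ThreadPoolExecutor

def activate_package(records):
    # Stand-in for a real ODS activation job on one data package.
    return len(records)

def activate(all_records, package_size, processes):
    # Split into packages, then activate them with a bounded worker pool.
    packages = [all_records[i:i + package_size]
                for i in range(0, len(all_records), package_size)]
    with ThreadPoolExecutor(max_workers=processes) as pool:
        return sum(pool.map(activate_package, packages))

total = activate(list(range(100_000)), package_size=20_000, processes=4)
```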
ODS Query Performance
Thanks,
JituK -
How to change project loading theme
Hi There,
Can somebody please suggest how to change the project loading theme in Captivate 6?
Please refer to the image below; I want this loading screen.
Looking forward to your help.
Thanks,
Srikanth

RodWard,
Thank you very much.
Regards,
Srikanth. -
How can I reduce applet loading time?
I have recently begun converting a GUI application to an applet. The problem I have is the loading time of the applet, which can be several minutes.
The GUI has a progress bar which tracks the loading of the classes, images, sounds, etc.; once this is up and running, the loading time is fairly short. However, it takes forever for the GUI to actually begin to display.
The code for the classes is only about 150K altogether; there seems to be a period of long modem inactivity once the applet is initialized, before the GUI is displayed.
How can I reduce the time it takes for the applet to initialize before the GUI is displayed? Otherwise users will think nothing is happening and not bother loading it.
the applet is currently at http://www.winnieinternet.com/games/startrade2095/applet/startrade2095.htm
if you need a demo of the problem, although the applet is still work in progress.
Many thanks in advance for any help
W.Coleman
www.winnieinternet.com

Some suggestions could be:
1. Bundle all classes and resource files in a jar file.
2. Try to preload the heavier files (e.g. sound files) in a background thread, instead of init() method. See an example for this in Sun's Java tutorial, under the trail 'Sound'. -
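Suggestion 2 can be sketched in Python (resource names and timings are made up; the same pattern applies to a Java applet's init()): heavy loads run on a background thread so the GUI can be shown immediately instead of blocking until everything is in.

```python
import queue
import threading
import time

def load_resource(name):
    time.sleep(0.01)                 # stand-in for fetching a heavy file
    return name + "-data"

def preload(names, results):
    # Runs on the background thread; the GUI thread stays responsive.
    for n in names:
        results.put((n, load_resource(n)))

results = queue.Queue()
worker = threading.Thread(target=preload,
                          args=(["intro.wav", "logo.png"], results),
                          daemon=True)
worker.start()
# ... the GUI would be displayed here while loading continues ...
worker.join()
loaded = dict(results.get() for _ in range(2))
```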
What are the ways to reduce the load time of master data?
Hi
The full load of the master data load IP (InfoPackage) takes 18 hours to 2 days, loading 2,516,360 records on average.
Is there any option (other than data selection) to reduce the load time? It is causing database locks and impacting transports.
Thanks in advance
Anuj

You will have to do some research. What MD extractor are you talking about?
Test on the R/3 system: first try to extract a considerable number of records via transaction RSA3 in R/3 (10,000 or 100,000) and note how long it takes.
Extract the same into BW, but only into the PSA. Again measure the time.
Load the data from the PSA into the datatarget and see how long this takes. You should now have a picture of where the performance problems are located.
Is performance also bad for small loads, or is there a boundary below which performance is OK? (In other words, does loading 200,000 records take ten times longer than loading 20,000 records?)
Suspect the following in R/3:
- datasource enhancements in R/3. A redesign may improve your extraction in a big way.
- Missing indexes. If you are extracting data from tables without proper indexes, extraction performance can be dramatically poor.
Suspect the following if loading to PSA is bad:
- Datapackage reads data in small chunks (much smaller than 50000 records). Overhead causes more time than the actual data transport.
- Network problems. Maybe the network is congested by other servers?
If loading from PSA to datatarget is slow:
- Check start routines for performance.
- Are enough batch partitions available? Sometimes activation of ODS can be improved by more parallel processes.
- Are you using a lot of master data lookups when filling the datatargets?
When you report your findings in this forum, we may be able to help you further. -
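The staged measurement above can be sketched in Python; the three functions are placeholders for RSA3 extraction, the PSA load, and datatarget activation, with sleeps standing in for real work.

```python
import time

def extract(n):  time.sleep(n * 1e-7)   # stand-in for RSA3 extraction
def transfer(n): time.sleep(n * 1e-7)   # stand-in for the PSA load
def activate(n): time.sleep(n * 1e-7)   # stand-in for activation

def timed(stage, n):
    start = time.perf_counter()
    stage(n)
    return time.perf_counter() - start

for records in (20_000, 200_000):
    times = {s.__name__: timed(s, records)
             for s in (extract, transfer, activate)}
    # If the 200k run takes far more than 10x the 20k run in one stage,
    # focus tuning there (indexes, package size, start routines, ...).
```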
How to tune data loading time in BSO using 14 rules files ?
Hello there,
I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
In a nightly process using MAXL, I load new data into one Essbase cube.
In this nightly update process, 14 account members are updated by running 14 rules files one after another.
These rules files connect 14 times via SQL connection to the same Oracle database and the same table.
I use this procedure because I cannot load two or more data fields using one rules file.
It takes a long time to load 14 accounts one after the other.
Now my question: how can I minimise this data loading time?
This is what I found on Oracle Homepage:
What's New
Oracle Essbase V.11.1.1 Release Highlights
Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
In an Older Thread John said:
As it is version 11 why not use parallel sql loading, you can specify up to 8 load rules to load data in parallel.
Example:
import database AsoSamp.Sample data
connect as TBC identified by 'password'
using multiple rules_file 'rule1','rule2'
to load_buffer_block starting with buffer_id 100
on error write to "error.txt";
But this is for the ASO option only.
Can I use it in my MAXL for BSO too? Is there a sample?
What else is possible to tune the nightly update time?
Thanks in advance for every tip,
Zeljko

Thanks a lot for your support. I'm just a little confused.
I will use an example to illustrate my problem a bit more clearly.
This is the basic table, in my case a view, which is queried by all 14 rules files:
column1 --- column2 --- column3 --- column4 --- ... ---column n
dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
Here are 3 of the 14 (load) rules files that load the data columns into the cube:
Rules File1:
dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
Rules File2:
dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
Rules File14:
dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
Is the table layout above what GlennS mentioned as the "Data" column concept, which only allows a single numeric data value?
In this case I can't tag two or more columns as "Data fields"; I can only tag one column as a "Data field" and have to tag the other data columns as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
Or may I ignore this error message and just try to tag all 14 fields as "Data fields" and load the data?
Please advise.
Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
Account --- Region -- ID --- Product --- data
sales --- West --- D1 --- Coffee --- 11001
sales --- West --- D2 --- Tea10 --- 12011
sales --- West --- S1 --- Tea10 --- 14115
sales --- West --- C3 --- Tea10 --- 21053
sales --- East ---- S2 --- Coffee --- 15563
sales --- East ---- D1 --- Tea10 --- 15894
sales --- East ---- D3 --- Coffee --- 19689
sales --- East ---- C1 --- Coffee --- 18897
sales --- East ---- C3 --- Tea10 --- 11699
cogs --- West --- D1 --- Coffee --- 1,322
cogs --- West --- D2 --- Tea10 --- 1,325
cogs --- West --- S1 --- Tea10 --- 1,699
cogs --- West --- C3 --- Tea10 --- 1,588
cogs --- East ---- S2 --- Coffee --- 1,458
cogs --- East ---- D1 --- Tea10 --- 1,664
cogs --- East ---- D3 --- Coffee --- 1,989
cogs --- East ---- C1 --- Coffee --- 1,988
cogs --- East ---- C3 --- Tea10 --- 1,328
discounts --- West --- D1 --- Coffee --- 10789
discounts --- West --- D2 --- Tea10 --- 10548
discounts --- West --- S1 --- Tea10 --- 10145
discounts --- West --- C3 --- Tea10 --- 10998
discounts --- East ---- S2 --- Coffee --- 10991
discounts --- East ---- D1 --- Tea10 --- 11615
discounts --- East ---- D3 --- Coffee --- 15615
discounts --- East ---- C1 --- Coffee --- 11898
discounts --- East ---- C3 --- Tea10 --- 12156
amount --- West --- D1 --- Coffee --- 548
amount --- West --- D2 --- Tea10 --- 589
amount --- West --- S1 --- Tea10 --- 852
amount --- West --- C3 --- Tea10 --- 981
amount --- East ---- S2 --- Coffee --- 876
amount --- East ---- D1 --- Tea10 --- 156
amount --- East ---- D3 --- Coffee --- 986
amount --- East ---- C1 --- Coffee --- 256
amount --- East ---- C3 --- Tea10 --- 9896
And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE)
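That reconstruction is a classic unpivot (wide to long). Here is a pure-Python stand-in for the reworked view, using the column names from the example above; in Oracle the same unpivot would typically be a UNION ALL of one SELECT per account column inside the view.

```python
# Two sample rows of the wide layout (one column per account).
wide = [
    {"Region": "West", "ID": "D1", "Product": "Coffee",
     "sales": 11001, "cogs": 1322, "discounts": 10789, "amount": 548},
    {"Region": "East", "ID": "C3", "Product": "Tea10",
     "sales": 11699, "cogs": 1328, "discounts": 12156, "amount": 9896},
]
ACCOUNTS = ["sales", "cogs", "discounts", "amount"]

def unpivot(rows):
    """Turn one row with N data columns into N rows with an Account
    dimension and a single data column."""
    long_rows = []
    for account in ACCOUNTS:
        for r in rows:
            long_rows.append({"Account": account, "Region": r["Region"],
                              "ID": r["ID"], "Product": r["Product"],
                              "data": r[account]})
    return long_rows

long_form = unpivot(wide)    # 2 rows x 4 accounts -> 8 long rows
```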
I just want to be sure that I understand your suggestions.
Many thanks for awesome help,
Zeljko -
How to reduce the fetch time of this SQL?
Here is the SQL, a three-table join; the join conditions are:
ims_ets_cntrl.ims_cntrt_oid = ims_alctn.ims_alctn_oid
ims_alctn.ims_trde_oid = ims_trde.ims_trde_oid
SELECT 'MCH' Type, ims_ets_cntrl.STTS tp_stts, count(*) Count
FROM ims_ets_cntrl
WHERE ims_ets_cntrl.ims_cntrt_oid IN
  (SELECT ims_alctn.ims_alctn_oid
   FROM ims_alctn,
        (SELECT ims_trde.ims_trde_oid
         FROM ims_trde
         WHERE (IMS_TRDE.IMS_TRDE_RCPT_DTTM >= TO_DATE('10/29/2009 00:00', 'MM/DD/YYYY HH24:MI')
                AND IMS_TRDE.IMS_TRDE_RCPT_DTTM <= TO_DATE('11/5/2009 23:59', 'MM/DD/YYYY HH24:MI'))
           AND (IMS_TRDE.GRS_TRX_TYPE IN ('INJECTION','WITHDRAWAL','PAYMENT')
                OR IMS_TRDE.SSC_INVST_TYPE = 'FC' AND IMS_TRDE.SERVICE_TYPE = 'FS')) TRDE
   WHERE IMS_ALCTN.IMS_TRDE_OID = TRDE.IMS_TRDE_OID)
  AND ims_ets_cntrl.outbnd_dest = 'ETD'
GROUP BY ims_ets_cntrl.STTS

Optimizer and related parameter info:
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL>select pname, pval1, pval2 from sys.aux_stats$ where sname='SYSSTATS_INFO';
DSTART 11-16-2009 10:23
DSTOP 11-16-2009 10:23
FLAGS 1
STATUS NOWORKLOAD

Here is the autotrace output:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (GROUP BY)
2 1 VIEW
3 2 SORT (UNIQUE)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'IMS_ETS_CNTRL'
5 4 NESTED LOOPS
6 5 NESTED LOOPS
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'IMS_TRDE'
8 7 INDEX (RANGE SCAN) OF 'IMS_TRDE_INDX4' (NON- UNIQUE)
9 6 TABLE ACCESS (BY INDEX ROWID) OF 'IMS_ALCTN'
10 9 INDEX (RANGE SCAN) OF 'IMS_ALCTN_INDX1' (NON -UNIQUE)
11 5 INDEX (RANGE SCAN) OF 'IMS_ETS_CNTRL_INDX1' (NON -UNIQUE)
Statistics
0 recursive calls
0 db block gets
244608 consistent gets
58856 physical reads
0 redo size
497 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed

Here is the TKPROF output:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 4.85 129.72 53863 244608 0 1
total 4 4.85 129.72 53863 244608 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63
Rows Row Source Operation
1 SORT GROUP BY
12972 VIEW
12972 SORT UNIQUE
12972 TABLE ACCESS BY INDEX ROWID IMS_ETS_CNTRL
46236 NESTED LOOPS
19134 NESTED LOOPS
19744 TABLE ACCESS BY INDEX ROWID IMS_TRDE
176922 INDEX RANGE SCAN IMS_TRDE_INDX4 (object id 34099)
19134 TABLE ACCESS BY INDEX ROWID IMS_ALCTN
19134 INDEX RANGE SCAN IMS_ALCTN_INDX1 (object id 34094)
27101 INDEX RANGE SCAN IMS_ETS_CNTRL_INDX1 (object id 34101)
********************************************************************************

Explain plan output:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT GROUP BY | | | | |
| 2 | VIEW | | | | |
| 3 | SORT UNIQUE | | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID | IMS_ETS_CNTRL | | | |
| 5 | NESTED LOOPS | | | | |
| 6 | NESTED LOOPS | | | | |
|* 7 | TABLE ACCESS BY INDEX ROWID| IMS_TRDE | | | |
|* 8 | INDEX RANGE SCAN | IMS_TRDE_INDX4 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID| IMS_ALCTN | | | |
|* 10 | INDEX RANGE SCAN | IMS_ALCTN_INDX1 | | | |
|* 11 | INDEX RANGE SCAN | IMS_ETS_CNTRL_INDX1 | | | |
Predicate Information (identified by operation id):
4 - filter("IMS_ETS_CNTRL"."OUTBND_DEST"='ETD')
7 - filter("IMS_TRDE"."GRS_TRX_TYPE"='INJECTION' OR "IMS_TRDE"."GRS_TRX_TYPE"='WITHD
RAWAL' OR "IMS_TRDE"."GRS_TRX_TYPE"='PAYMENT' OR "IMS_TRDE"."SSC_INVST_TY
PE"='FC' AND "IMS_TRDE"."SERVICE_TYPE"='FS')
8 - access("IMS_TRDE"."IMS_TRDE_RCPT_DTTM">=TO_DATE('2009-10-29 00:00:00', 'yyyy-mm-
dd hh24:mi:ss') AND "IMS_TRDE"."IMS_TRDE_RCPT_DTTM"<=TO_DATE('2009-11-05
23:59:00', 'yyyy-mm-dd hh24:mi:ss')
10 - access("IMS_ALCTN"."IMS_TRDE_OID"="IMS_TRDE"."IMS_TRDE_OID")
11 - access("IMS_ETS_CNTRL"."IMS_CNTRT_OID"="IMS_ALCTN"."IMS_ALCTN_OID")
Note: rule based optimization

Could you please help tune this SQL?
How can I reduce the elapsed time? How can I reduce the query reads?
If there is any other info that you need, please let me know!
Thank you very much!

What exactly is this logic meant to do?
AND (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
OR ims_trde.ssc_invst_type = 'FC'
AND ims_trde.service_type = 'FS')

Is that really:
AND (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
OR ims_trde.ssc_invst_type = 'FC')
AND ims_trde.service_type = 'FS'

Or is it maybe:
AND (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
OR (ims_trde.ssc_invst_type = 'FC'
AND ims_trde.service_type = 'FS'))? -
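Since SQL, like Python, binds AND tighter than OR, the difference between those readings can be demonstrated with a small sketch; the sample row values are made up to show where the readings disagree. (The third reading, with explicit parentheses, groups the same way as the predicate as written.)

```python
# A PAYMENT row whose service_type is not 'FS'.
row = {"grs_trx_type": "PAYMENT", "ssc_invst_type": "XX",
       "service_type": "XX"}

def as_written(r):
    # IN (...) OR (invst_type = 'FC' AND service_type = 'FS')
    return (r["grs_trx_type"] in ("INJECTION", "WITHDRAWAL", "PAYMENT")
            or (r["ssc_invst_type"] == "FC" and r["service_type"] == "FS"))

def variant_two(r):
    # (IN (...) OR invst_type = 'FC') AND service_type = 'FS'
    return ((r["grs_trx_type"] in ("INJECTION", "WITHDRAWAL", "PAYMENT")
             or r["ssc_invst_type"] == "FC")
            and r["service_type"] == "FS")

# The sample row matches the predicate as written but not variant two.
print(as_written(row), variant_two(row))     # prints: True False
```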
How to reduce the cloning time if using cold backup?
Hi,
We are using EBS R12 (12.0.6) with database 10.2.0.3 in a 32-bit Red Hat Linux 4 environment.
Our database is around 480 GB, and we are struggling to provide clones to our consultants within the given target timelines.
(Source:) Production Dell R900 server with 32 GB RAM and 8 quad-core CPUs.
(Target:) Clone Dell 2950 server with 16 GB RAM and 4 quad-core CPUs.
Currently we take a cold backup as follows:
1: The EBS R12 database is shut down automatically at 12:00 AM daily, and the backup is compressed using the tar utility in Linux. This process takes approximately 6 hours.
2: We then move the compressed file to the clone machine and uncompress it, which takes approximately 5 hours.
3: Then we perform the standard cloning steps.
Questions:
1: How can we reduce the time of this backup process?
3: Is there any other way to reduce the cloning process?
2: What type of backup does Oracle recommend to its customers for this type of process?
Thanks.

1: How to reduce time of this backup process?

Without using third-party tools, it might be hard to tune the timing of compressing/uncompressing the file.
Have you tried to use scp command? This would help if your network throughput is acceptable.
3: Is there any other way to reduce the cloning process.

Since the main issue you have is with the copy, you might copy the files remotely from the source to the target, or use other storage/backup tools (such as a file system snapshot).
2: What type of backup oracle recommended to their customers for this type of process?

Oracle does not recommend any particular type of backup, as the tools used are not Oracle products.
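The remote-copy idea can be sketched by streaming the tar archive straight to the target host over ssh, so compression, transfer, and extraction happen in one pass instead of three serial steps. Host and path names below are hypothetical, and the sketch only builds the command rather than running it.

```python
import shlex

def streaming_clone_cmd(src_dir, target_host, dest_dir):
    # Pack on the source, pipe over ssh, unpack on the target: one pass.
    pack = f"tar czf - -C {shlex.quote(src_dir)} ."
    unpack = f"tar xzf - -C {shlex.quote(dest_dir)}"
    return f"{pack} | ssh {target_host} {shlex.quote(unpack)}"

cmd = streaming_clone_cmd("/u01/PROD/db", "clonehost", "/u01/CLONE/db")
# The command would be run on the source server (e.g. via subprocess with
# shell=True) after the usual clean database shutdown.
```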
Thanks,
Hussein -
How to reduce Oracle Cluster Time Synchronization time to 0
Hi Guys,
How can I reduce the Offset (in msec) of Oracle Cluster Time Synchronization to 0?
Is there a command for this, or is the only option to wait for it to reach 0 slowly?
[root@caslive bin]# ./crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): -1300
regards,
manish
Email ID: [email protected]

Hi,
1. From the DB02 --> detail analysis menu you can list the top 50 tables and indexes. The size of an index should generally be less than that of its table; if it is larger, or very similar in size, you can rebuild it using SE14. This can free some space.
or else you may use brspace to do this.
http://help.sap.com/saphelp_nw70/helpdata/EN/58/6bec38c9fa7e44b7f2163905863575/frameset.htm
For tables this option is risky, as it may result in data loss.
2. The EarlyWatch alert gives the top 20 degenerated indexes; you can check that report, which also gives a 'storage quality' factor.
3. Run report SAP_DROP_TMPTABLES. It removes temporary database objects. ( we do this in our BW system)
Hope this helps
Thanks
Sushil -
How to reduce Post-Processing time in oracle applications R12?
Hi All,
I ran an XMLP concurrent program at 16-APR-2015 11:23:52; it completed at 16-APR-2015 11:30:41, so it took 6 minutes 49 seconds. When I check the query in SQL Developer, it executes in less than two minutes. The program's log file shows the following info...
Beginning post-processing of request 22226009 on node DEV at 16-APR-2015 11:24:43.
Post-processing of request 22226009 completed at 16-APR-2015 11:30:41.
So what is the reason, and how can I reduce the time?

HTML is a simple format (text).
Excel is more complicated and requires more time to generate. The same applies for Word and PDF output.
Maybe the processing power and memory of the server are not high, and this does not help generate the output faster.
I have a workaround for you, generate an XML file (which should finish faster) and open it in Excel. -
How to improve the load time of my swf group
Hi,
I need some tricks to improve the load time of my SWF Captivate online training. The training has 6 sections, and it takes 3 minutes to download each time I open the training window. That is too long, and if there are 50 users at the same time, it will consume a lot of my website's bandwidth. Do you have any tips on Captivate settings, or other tips, to help reduce the training download time? I do not understand why the 6 modules load simultaneously, instead of each loading when I click to start that part of the training.
Can you help me with my problem?
Thank you

Bryan,
If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (You can double-click it and look at the "guts".)
If the files come from a just executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed and you never e.g. want it to popup a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
I would do a streamlined low-level "open->read->close" for each file and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files; it will read MUCH faster and take up less disk space.
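The text-vs-binary point can be illustrated in Python (file names and sizes are made up; LabVIEW would use its own file primitives): the same numeric data is stored as formatted text and as flat binary, and the binary read needs no per-value parsing.

```python
import os
import struct
import tempfile

values = [float(i) for i in range(1000)]
tmp = tempfile.mkdtemp()
txt_path = os.path.join(tmp, "data.txt")
bin_path = os.path.join(tmp, "data.bin")

with open(txt_path, "w") as f:                    # formatted text
    f.write("\n".join(str(v) for v in values))
with open(bin_path, "wb") as f:                   # flat binary, float64
    f.write(struct.pack(f"{len(values)}d", *values))

with open(txt_path) as f:                         # text: parse every line
    text_vals = [float(line) for line in f]
with open(bin_path, "rb") as f:                   # binary: one unpack
    bin_vals = list(struct.unpack(f"{len(values)}d", f.read()))
```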
LabVIEW Champion . Do more with less code and in less time . -
How to reduce processor load?
My book is getting big, 200+ pages with lots of drawings, etc. It is eating up a lot of processor power and running slowly. What are some things I can do to reduce the load? Is there some way to freeze a range of pages?
WestB2012 wrote:
Hi Peter, I have Windows managing the virtual memory. I have 7+GB free on my drive and I recently defragged.
I ended up just resaving under a different name and deleting all but the chapters I was working on, which is working better.
However, when I import it back, I will have problems with the delay again.
Saving a file to a new name relieves most applications of the effort and memory needed to track changes to the file; the normal save process usually emulates what you might do on paper: circle content that's moved and draw a line to the new location, mark content to delete, write in the margins and mark new material with insertion carets. Save As does all those tasks, cleaning up the file internally, so there's less work to do. A new save starts the process over.
2 GB RAM is marginal. Parts of running applications, and the current content in those applications, are temporarily stored in RAM if there's enough, or else in virtual memory (really written to disk). It's faster to store and retrieve completely or mostly in RAM; the less virtual memory (disk) used, the faster. Also, with virtual memory, the faster the disk drive (7,200, 10,000, or 15,000 RPM, vs. 5,400 RPM, the most common in older and low-end systems), the faster virtual memory processing will be.
Working with large amounts of material in individual files and managing them with the InDesign book feature reduces the processing load, because you only have a few moderate-sized files open, not the whole humongous file.
7 GB once was considered "lots of space." When I bought an expensive box of ten single-sided 5.25" floppies, each able to store 85 KB, with my first computer, I thought I'd never need any more backup storage. When I moved to an affordable (~$400) HUGE 20 MB hard drive after a couple of years, I had hundreds of floppies, many of which I had "notched" to convince the disk drives that they were double-sided disks that could be inserted with either side up, to gain double capacity.
The point here is that in your 7GB of free space, you need to store the virtual memory, and any data that the application normally writes temporarily to disk in order to perform its tasks, and any temporary storage that Windows needs as well. Perhaps 2x, 3x, or more times the total size of all active files in the active application is needed. Unlike Photoshop, Illustrator, and other applications, InDesign doesn't offer the ability to specify a separate "scratch disk," on which to do this background work; in a sense, it's like a multiple processor CPU, in that it divides the work. Without a scratch disk, the single drive has to alternate between storing and retrieving material - store, retrieve, store, retrieve, etc. IOW, SLOW. It would be a great feature enhancement request; you can enter it formally at Adobe - Feature Request/Bug Report Form.
HTH
Regards,
Peter
Peter Gold
KnowHow ProServices -
How to reduce instance recovery time
Hello friends, I hope you are all well. I want to reduce the instance recovery time, but I don't know which views or parameters could help me. Please tell me how I can reduce the instance recovery time.
Thanks & Best Wishes

Hi,
Reduced instance recovery time:
First check your DB size and how many users connect to it. My advice is not to set the instance recovery time too low, as it may decrease run-time performance.
The usual procedure to reduce instance recovery time:
-) Frequent checkpoints
-) Set the FAST_START_MTTR_TARGET parameter
-) Size the online redo log files
-) Implement manual checkpoints
-) Reduce the log_buffer size
Note that reducing instance recovery time in these ways can decrease run-time performance.