DRM 11.1.1.3 Performance benefit to making a version 'submitted'?

Hi All:
Just a quick question: are there any performance or storage benefits in DRM 11.1.1.x to changing a version's status to Submitted? What I am aiming at here is ideally maintaining some prior-year versions, as required by law, while keeping their resource use to a minimum.
Agnete

Hi Agnete,
Changing the version status controls who can edit the version's content. While a version is in Working status, you can update its content whether or not you are the version owner; once the status is Submitted, only the version owner and users with the Data Manager role can modify it.
Yes, this is one of the performance-tuning techniques, but it does not help with storage.
If you are keeping those earlier versions for reference only, you can set them to Expired, so that no one can modify them.

Similar Messages

  • Oracle DRM 11.1.2.2 performance issue

    Hi,
    Is there any guide to fine tune DRM performance for version 11.1.2.2.302?
    We are running a project to upgrade DRM from version 9.3.1 to version 11.1.2.2, and we found a performance issue
    with 11.1.2.2: a data import that previously ran in under 2 hours on version 9.3.1 now runs for more than 24 hours
    on version 11.1.2.2.302.
    We would like to find out whether there are any settings we may need to update, e.g.
    memory, IIS, database, etc., or anything else we should look at to find the issue.
    We use SQL Server for the database and run an import command:
    [ImportCOEMetadata]
    Operation=Import
    InFile="E:\inbound\FileToImport.txt"
    ImportAbbrev="Import Metadata"
    ImportLogFilename="E:\logs\lImportResult.log"
    VersionAbbrev="Hier_Metadata"
    AutoSave=False
    Thanks.

    Hi Richard,
    We only have 98 thousand lines with 29 properties.
    There is an adjustment made on machine.config as follows:
    <system.web>
    <processModel autoConfig="true" responseDeadlockInterval="01:00:00" maxWorkerThreads="100" maxIoThreads="100" restartQueueLimit="500" memoryLimit="25" idleTimeout="infinite" />
    <httpHandlers />
    This helped improve other, smaller imports, but this one just runs forever.
    If you could share any settings we need to adjust or configure (timeout, memory, database, etc.), that would be helpful.
    Thanks.

  • Is there any performance benefit in placing the LR Catalogue on an SSD

    I currently have the LR Catalogue [16 Gb] on a WD Black HDD and read/write is OK but would there be any benefit in using an SSD?
    My DNGs are stored on an HDD and I doubt any significant improvements would be noticed there [plus SSDs don't have the capacity yet].
    Any thoughts/experiences please ?
    LJ

    Lightroom is pretty quick off the mark in terms of its read/write capability but it definitely does get slower [on HDD] when you've amassed 10K or more catalogued items.
    Completely untrue, and if you believe this, it will cause you to do very unnecessary and harmful things to your catalog. People in this and other forum report catalogs of over a quarter of a million images running well. I myself have much fewer images, but still 25,000 images, without catalog slowness.
    If you are experiencing slowness, it is almost definitely not due to the number of images; and you might want to describe in more detail what part(s) of Lightroom are slow, so that a better diagnosis and solution can be found.

  • ExpressCache & Outlook OST/PST - any performance benefit?

    Hey Guys,
    Any ideas on whether ExpressCache helps with Outlook OST/PST access times?
    These files that range in size from 1-5GB have a huge impact on Outlook usability when it comes to access times and I'm wondering if it's worth removing ExpressCache and partitioning the mSATA hard drive to manually place the Outlook OST and other often accessed data files on the faster partition.
    Any thoughts?

    Hello Joe,
    You can use the Microsoft utilities scanpst.exe or scanost.exe. You can also use exmerge.exe or a Recovery Storage Group to recover a corrupted file.
    scanpst.exe is also called the Inbox Repair Tool. This is a free tool installed with Outlook. It can fix most minor errors and corruptions in your PST files. More detailed information can be found at http://support.microsoft.com/kb/287497
    You can also try the Recoveryfix for Outlook PST Repair software, which aims to recover corrupted Outlook items such as calendars, tasks, appointments, and deleted or missing emails, and lets you preview the recovered items. It can help you
    recover and copy your Outlook archive file. Download from http://www.repairostfile.net
    Thanks

  • Is there any benefit to 64-bit version when I only have 4GB of RAM?

    My laptop only has 4GB of RAM so I am wondering if there is any benefit to installing 64-bit version? Besides I have an AMD Athlon II P340 Dual-Core 64-bit processor which isn't very powerful so I am not even in the best spot to take advantage of 64-bit. For those wondering, I am using Windows 7 64-bit.
    Although I am posting this in Photoshop, I am in fact installing most of the Adobe CS6 Master Collection so there is a chunk of space to be saved. Also I don't tend to use plugins, only OnOne which are 64-bit. So should I avoid installing the 64-bit version altogether? or should I install 64-bit and get rid of 32-bit? What is your opinion?

    That's entirely up to you - if you're SURE you'll not need any 32 bit plug-ins then you're probably going to be okay.  The 64 bit version does everything.
    I personally DO have some 32 bit only plug-ins and I occasionally do use the 32 bit Photoshop for that reason, so I have them both installed.
    The only other consideration I can think of is that occasionally a system might have a problem (with drivers or something) where the 32- or 64-bit version will work better than the other.  Probably your best bet, assuming you're trying to minimize disk usage, is to install just the 64-bit version and see how well it meets your needs.  You can always go back, uninstall, and install both (or the other).
    A "thinking out of the box" alternative...  Maybe you could consider getting a big new SSD drive to replace the drive you have, which will both kick performance up and give you a lot of extra space.  Of course, this costs $$$.
    -Noel

  • IPhoto 6.0.3 (293) Performance Worse Than Any Other Version

    Interesting. I have 9,990 photos and my iPhoto 6 just came to a crawl. I have 1.5GB of memory. Something is definitely wrong. When I start iPhoto it appears to behave normally for about 30 seconds.
    I've read many of the other speed/performance/slow threads in this forum. Here's what I've tried:
    I have all my Rolls collapsed. After doing a couple transactions (eg: scrolling up/down or editing one picture), iPhoto freezes.
    When this happens I'll open Activity Monitor and find my dual CPUs pegged at about 97%.
    I've rebuilt my library, my small thumbnails, my large thumbnails, loaded IPLM (iPhoto Library Manager) and rebuilt the library using its copy function, and I've even moved my library to another volume. Nothing has worked. I've even run Disk Warrior against my volume that hosts iPhoto, reset permissions and run Disk Utility and verified that the hard drive is OK. I've also removed my iPhoto preference file. Nothing has helped.
    I'm beside myself.
    I'm currently running a manual rebuild test (UGGGH!!!!!) to see if it's a problem with my existing library's data (i.e. corruption?) or with my instance of iPhoto.
    (NOTE: a smaller, newer library helps but I don't want to lose my meta data. That's, frankly, the most important feature of iPhoto for my wife and me.)
    I've never heard the fans on my Dual G4 PPC (mirror door model) run so loudly. Of course, I've rebooted several times too.
    If anyone has any other suggestions, please let me know. I have a backup, but I fear the backup contains the same, presumably corrupt, files as my active volume.
    Dual 1.25GHz G4 PowerPC Mac OS X (10.4.6)

    G4Monster:
    Have you tried rebuilding the library with iPLM and its File->Rebuild Library... menu option? That creates a new library, copying the files and rebuilding the database, etc. Or is that what you meant by "using its copy function"?
    If you find you have to create new libraries in the end, there is a way to preserve the comments and keywords. See Tutorial #1 on how to preserve them.

  • Is there any performance value to making a tablespace READ ONLY?

    In 9i, if I were to set a tablespace which only contains data to be reported over - is there any value in making it READ ONLY other than "safety" of the data? I am particularly interested to know if it can increase performance.
    Thanks

    When you say safety, you are partly correct: the data cannot be deleted or truncated, but the table can still be dropped. Check the SQL statements below.
    With regard to performance, yes you will gain some. For better performance while accessing data in a read-only tablespace, you can issue a query that accesses all of the blocks of the tables in the tablespace just before making it read-only. A simple query, such as SELECT COUNT (*), executed against each table ensures that the data blocks in the tablespace can be subsequently accessed most efficiently. This eliminates the need for Oracle to check the status of the transactions that most recently modified the blocks.
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96521/tspaces.htm#922
    SYS@BABU> create tablespace test_ts datafile 'D:\ORACLE\ORADATA\TEST_TS1.DAT' SIZE 15M
      2  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
    Tablespace created.
    SYS@BABU> CREATE TABLE TAB1 (A NUMBER) tablespace test_ts ;
    Table created.
    SYS@BABU> BEGIN
      2  FOR A IN 1..10 LOOP
      3  INSERT INTO TAB1 VALUES (A);
      4  END LOOP;
      5  COMMIT;
      6  END;
      7  /
    PL/SQL procedure successfully completed.
    SYS@BABU> ALTER TABLESPACE test_ts READ ONLY;
    Tablespace altered.
    SYS@BABU> DELETE FROM TAB1;
    DELETE FROM TAB1
    ERROR at line 1:
    ORA-00372: file 6 cannot be modified at this time
    ORA-01110: data file 6: 'D:\ORACLE\ORADATA\TEST_TS1.DAT'
    SYS@BABU> truncate table TAB1;
    truncate table TAB1
    ERROR at line 1:
    ORA-00372: file 6 cannot be modified at this time
    ORA-01110: data file 6: 'D:\ORACLE\ORADATA\TEST_TS1.DAT'
    SYS@BABU> DROP TABLE TAB1 ;
    Table dropped.

  • Unable to perform update from OS X version 10.5.4 to 10.5.5

    Automatic updater downloaded the OS X 10.5.5 update and tried to install it but failed.
    The installation stalled at "Configuration Installation" with a small amount of blue showing in the progress bar. I left it for several hours but it got no further.
    Do I have a corrupt download, or what?

    Please read these two support articles:
    http://support.apple.com/kb/HT2405
    and
    http://support.apple.com/kb/HT1569?viewlocale=en_US

  • HELP! Need to demonstrate "benefit" of loading 2nd version of CS on personal computer

    Hello!
    I manage a group of in-house designers who would like to use the second set of Adobe CS4 licenses on their personal computers. The problem is, since the "Company" is the "owner" of the licenses, they say we need to demonstrate how this benefits the Company.
    Okay, so as designers we know this benefits the company, but does anyone have any ideas how I "demonstrate" this? Has anyone encountered this situation before?
    Please HELP!
    Thank you!
    Brandy

    So you're saying that if we purchased more than 1 license, this would not be valid?
    Do you know where that info would be? I read through the EULA and I've only been able to spot the section stating that it's all right to have the license installed on a second or personal computer as long as both licenses aren't used at the same time.

  • Effect performance when using different IE version

    Dear Guys,
    When I use IE version 6.0.2800.1106 and click a BW report in the Portal, everything is fine: logging on to the
    BW system and running SM04, I can see the one session I have connected, and after logging out of the Portal
    that session disappears from SM04.
    But when I use IE version 6.0.2900: first, I click a BW report in the Portal and one session appears in SM04.
    Second, I click another BW report in the same Portal session, and another session is generated in SM04
    while the old one does not disappear. Third, even after I completely log out of the Portal, these sessions
    still do not disappear from SM04.
    I don't know why. Could you help me find out?
    Thanks,
    Our Portal Platform : SP2 patch4 , J2eeEngine level 22

    I would try asking them if there are any known bugs in the new IE regarding session handling.
    Say that you have a web application which behaves differently (opens new session) depending on which browser version you are using.
    It might be a good idea not to mention SAP BW initially, since then they might automatically conclude that it's SAP's fault (which I believe is not the case).

  • How to use * / - + in a switch statement?

    So I wrote a calculator program (I'm just learning Java).
    As a beginner I used the "press 1 for +, press 2 for -" approach; now I want to go back and actually parse the real symbols * / - + and use those...
    I am using the JOptionPane library for getting numbers and such (inputting them as Strings, then converting to doubles)...
    How do I go about the operator symbols, though? I can grab one as a String, no problem... but then what?
    I've been trying to figure out how to make a switch statement, but "case *:" doesn't work; it doesn't like the *. I can't convert that to an int or a double (at least I don't think you can?). Or is a switch statement a bad way to go in this case?
    Any help would be appreciated. :)

    endasil wrote:
    I should mention that it's extremely easy to form a switch statement that wouldn't have a performance benefit, by making the options very sparse. Like this:
        int i = 1000;
        switch (i) {
            case 1:
                break;
            case 1000:
                break;
        }
    Since this is not going to be implemented as a branch table (given the values are so sparse), it probably provides no benefit.
    enums, on the other hand, will be extremely fast in a switch since their integer values (ordinals) are packed as tightly as possible, starting at 0, by the compiler.
    I wonder how Strings will fare in this regard. I suppose if the compiler mods the hashCode, it can pack the possible values fairly tightly. Also, I presume that the cases will have to be compile-time constants.
    But the weirdest part to me is that it'll be actually computing the hash code of Strings at compile time. At first I thought this was reasonable when I discovered that String.hashCode's result was specified by the JLS [http://java.sun.com/docs/books/jls/first_edition/html/javalang.doc11.html#14460]. However, in editions after JLS 1, those sections were removed. So where is the specification for Java libraries now? Is it simply the Javadoc (String#hashCode()'s Javadoc does specify the actual result of the computation)?
    Hmmm... The cases have to be compile-time constants, so that means strings will be in the constant pool, so I guess the compiler will build the lookups based on something that won't change for a given compiled .class file, regardless of which JVM loads it.
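    To the original question (switching on the operator symbol): before Java 7 you can switch on a single char, since char is an integral type; from Java 7 on you can switch on the String directly. A minimal sketch (the class and method names are mine, not from the thread):

```java
public class Calc {

    // Apply an arithmetic operator passed as a string such as "+" or "*".
    // String switch requires Java 7+; on older versions, switch on
    // op.charAt(0) instead, since char is an integral type.
    public static double apply(String op, double a, double b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            case "/": return a / b;
            default:
                throw new IllegalArgumentException("Unknown operator: " + op);
        }
    }

    public static void main(String[] args) {
        System.out.println(apply("*", 6, 7)); // prints 42.0
    }
}
```

    With a char selector the same four branches work as case '+':, case '-':, and so on, and the compiler compiles them to the usual tableswitch/lookupswitch just as it would for ints.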

  • SSRS : Performance related question

    Hi Friends,
    I have the below scenario:
    I have one master driver report (.rdl) which has 7 subreports. I have put expressions on the visibility of those subreports.
    The reports are running slowly.
    Example of one subreport: it has one big rectangle, and inside that rectangle the usual items (tablix, textboxes and images).
    Instead of 7 subreports I could use 7 big rectangles and put similar expressions on the visibility of those big rectangles.
    The benefit is that there would be only one report (the master driver report wouldn't need to call any subreports).
    The question is: will this new approach give any performance benefit?
    Thanks in advance,
    Parixitsinh

    Hi Parixitsinh,
    In Reporting Services, each subreport instance is a separate query execution and a separate report processing task, so there are performance advantages and disadvantages to using subreports. The details are below:
    Do use subreports when there are just a few subreport instances.
    Do not use subreports inside a group when there are many group instances. For example, to display a list of both sales and returns for each customer, consider using drillthrough reports. Consider whether you can write the query to join the customer with sales and returns and then group by the customer ID.
    Do use subreports when the subreport uses a different data source than the main report. If performance is an issue, consider changing the dataset query in the main report by using one of the following mitigation strategies:
    • Collect data in a data warehouse and use the data warehouse as a data source for a single dataset.
    • Use SQL Server linked servers and write a query that retrieves data from multiple databases.
    • Use the OPENROWSET capability to specify different databases.
    For more information about the report performance, we can refer to the following document:
    http://technet.microsoft.com/en-us/library/bb522806(v=sql.105).aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate IDs which have bitmap indexes on them and have FKs to dimension tables, plus several measures:
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users run for the most part: ones accessing all transactions for products regardless of when those transactions happened (i.e. non-financial queries) - about 70%;
    and queries determining what happened in a particular week - about 20% of queries.
    The table will eventually have approx 4bn rows.
    We are considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year; however, this column wouldn't be joined to any other table.
    We are then considering sub-partitioning by hash of PRODUCT_ID, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on query performance for queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on a partitioning strategy for our situation much appreciated.
    Thanks

    >
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table.
    >
    Queries that access multiple partitions can improve performance for two use cases: 1) only a subset of the entire table is needed and 2) if the access is done in parallel.
    Even if 9 of 10 partitions are needed that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions then you can get guaranteed benefits by limiting a query to only 1 (or a small number) partition when an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and parallel option is not available then I wouldn't expect any performance benefit for either single table or partitioning.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on partitioning strategy in our situation much apprecaited.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross Month and Year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A weekly partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect, or what problem are you trying to address? And why hash? Hash partitioning largely defeats partition pruning: Oracle can prune hash partitions only for equality or IN-list predicates on the partition key, so range-style predicates will need to scan all partitions.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Contrarily the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.

  • Ideas for rejuvenating MacBook in order to optimize GarageBand performance?

    Hello,
    I am finally about to start recording my album, using a very basic kit of Garageband 3.0.4, an Audio Technica AT4033 mic, Edirol UA-25 USB audio interface and various acoustic and electronic instruments (drums, percussion, vocals, piano, guitar, old analogue synth, etc). Backing up everything as I go onto a Lacie Rugged external hard drive. There's going to be quite a few layers of sound involved (lots of harmonies) and I'll probably be using some of GarageBand's built-in effects such as reverb.
    The definite aim is to manipulate these admittedly modest resources to create a finished product that is as professional and high-quality sounding (once it has been mastered) as anything that you would hear in the charts.
    I have a majorly limited budget so cannot afford a whole new Mac laptop at the moment.
    My question is, how can I best tweak my existing late-2006(?) MacBook so that GarageBand performance will be as reliable, latency-free and quick as possible? I really want to avoid GarageBand crashing and losing valuable recordings and work.
    I already have a few ideas:
    --Have just ordered a Kingston 2GB RAM memory module kit (KTA-MB667K2/2G) to increase my RAM from 1GB to 2GB
    --I've repaired Disk Permissions and done a full cleanup using CleanMyMac
    --Was thinking about transferring everything except perhaps the Home folder to a large external FireWire hard drive to free up maximum internal HD space, as I currently have only 35.3 GB available
    --Was also considering buying a secondary used (& cheap) Mac laptop from eBay for everything else (internet, other work, itunes etc) and using my MacBook for GarageBand only, in order to increase the internal hard disk's lifespan
    I would welcome any and all comments, recommendations and suggestions.
    Many thanks in advance
    kris

    kristopher19 wrote:
    Thanks for your reply gjmnz. I had checked my mac's specs on macupgrades.com and also did the Crucial memory scan, and these sites recommended a maximum of 2GB in 2 x 1GB PC5300 modules, which I've now fitted. Do you think it could actually take a bigger RAM upgrade, though?
    I would not know without doing a check; I was just going by the specs in your sig. I also have a white MacBook 2GHz with a 667MHz bus speed, and it is currently running 4GB of RAM and can actually take 6. With the white MacBooks there can be a difference between the originally advertised specs and what they can handle. I used the link I posted above to discover what my MB can handle. It may be that my MB was late in the refresh cycle and yours was early. Perhaps double-check with the OWC site in the link.
    As with the RAM, many of the MBs can handle much bigger and faster internal drives, even solid state. It has been my experience with PCs, using benchmarking software to check the average read/write speeds of internal and external drives, that the best-spent money is on your internal drives when compared against USB2 and FW400 external drives at speeds up to 7200rpm.
    Edit: Just did a check on the 2,1 macbook and here is a quote from the OWC site
    MacBook2,1 (All) - Install up to 4.0GB total memory, uses up to 3.0GB.
    Note: Although limited to physically utilizing a maximum of 3GB, there is a performance
    benefit due to 128 Bit addressing when 4GB( 2GB x 2 Matched set) is installed.
    Just double check yourself...
    Message was edited by: gjmnz

  • Question Isolation level on performance

    If I have a query for a report, and the report does not really need to be realtime (for example, a few minutes' delay is fine): in order to improve report performance, we could allow the query to read dirty data even when there is a transaction lock on the table. So if I change the isolation level from 3 to 1 or 0, is there any big performance gain?

    Not sure what the functionality of the report is, but you may also look at using the "readpast" query hint, which allows skipping rows on which incompatible locks are being held.
    Dirty reads should be carefully evaluated and explained with the users of the report, since sometimes they will approve dirty reads for the performance benefit but won't really understand the implications. Just from my book of experience.
    warm regards,
    sudhir
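    For reference, a rough sketch of how a report client could request the lower isolation level over JDBC. The mapping from Sybase-style levels 0-3 to the JDBC constants below is my assumption for illustration, not something stated in the thread:

```java
import java.sql.Connection;

public class IsolationLevels {

    // Map a Sybase ASE-style isolation level (0-3) to the corresponding
    // JDBC Connection constant; a report connection could then call
    // conn.setTransactionIsolation(jdbcLevel(0)) to allow dirty reads.
    public static int jdbcLevel(int sybaseLevel) {
        switch (sybaseLevel) {
            case 0: return Connection.TRANSACTION_READ_UNCOMMITTED; // dirty reads allowed
            case 1: return Connection.TRANSACTION_READ_COMMITTED;   // no dirty reads
            case 2: return Connection.TRANSACTION_REPEATABLE_READ;
            case 3: return Connection.TRANSACTION_SERIALIZABLE;
            default: throw new IllegalArgumentException("level " + sybaseLevel);
        }
    }
}
```

    The trade-off described above still applies: at level 0 the query does not block on writers, but it can return rows that are later rolled back.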
