Implications of having a high CRL publication interval

Hello,
We have a two-tier internal CA structure.
One of my regular tasks is to check the CAs for expired certificates or expired CRLs.
The root CA as well as the issuing CAs have CRL publication intervals set to 6 months for both the full and delta CRLs (these CAs are offline).
Their respective CRLs list only a handful of revoked certificates.
The subordinate CA does all of the work relating to issuing and revoking certificates. Its CRL is quite large and has an interval of 1 week for the full CRL and 2 hours for the delta.
Today I checked the CRL and it was set to expire this Sunday.
What would be the implications if I let this CRL expire? Would it renew / republish itself automatically?
I published it manually just in case, but it got me thinking: what would the implications be if I set the renewal period higher for the CRL?
We have a regular task of powering on the offline CAs and re-publishing their CRLs / certificates before they expire. I suspect the online servers can do this for themselves?
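For what it's worth: an online AD CS CA regenerates and republishes its CRLs on schedule for as long as Certificate Services is running and can write to its CDP locations, so it is only the powered-off (offline) CAs that need the manual power-on-and-publish routine. A minimal sketch for checking the configured schedule and forcing a republish on the issuing CA, assuming a default AD CS installation and an elevated PowerShell prompt on the CA itself (the CertEnroll path and file name below are placeholders):
    # Current validity of the base and delta CRLs on this CA
    certutil -getreg CA\CRLPeriodUnits
    certutil -getreg CA\CRLPeriod
    certutil -getreg CA\CRLDeltaPeriodUnits
    certutil -getreg CA\CRLDeltaPeriod
    # Force immediate generation and publication of a new base (and delta) CRL
    certutil -crl
    # Inspect the freshly published CRL (Next Update / Next CRL Publish fields)
    certutil -dump C:\Windows\System32\CertSrv\CertEnroll\MyIssuingCA.crl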

It depends on a number of factors, and each approach has its own pros and cons. For example, with a long-lived CRL you reduce the network traffic clients generate downloading CRLs, but the reaction time (how quickly a certificate is recognized as revoked)
is slow: for quite a long time a revoked certificate will still be accepted by clients as valid. With a short-lived CRL you reduce the reaction time, and clients recognize a recently revoked certificate as revoked more promptly. However, this approach increases CRL traffic,
because the CRL is short-lived and must be downloaded by clients more frequently.
As a general practice, offline CAs (usually root and policy CAs) that issue certificates only to other CAs and never to end entities may have a long-lived CRL, around 6-12 months, because CA revocation is unlikely thanks to strong security
measures (strict physical and remote access controls, HSMs and so on). Online CAs that issue certificates to end entities should have short-lived CRLs, because end-entity certificates are less protected and revocation is not unusual. The default value for Windows
CAs is 1 week. Treat this as a starting point and configure a CRL lifetime comparable to that value.
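To make that concrete, here is a minimal sketch (not part of the original reply) of configuring a one-week base CRL and a one-day delta on an online issuing CA, assuming default AD CS registry locations and an elevated PowerShell prompt on the CA; the values are examples only and should be tuned to your environment:
    certutil -setreg CA\CRLPeriodUnits 1
    certutil -setreg CA\CRLPeriod "Weeks"        # base CRL valid for one week
    certutil -setreg CA\CRLDeltaPeriodUnits 1
    certutil -setreg CA\CRLDeltaPeriod "Days"    # delta CRL valid for one day
    Restart-Service certsvc                      # AD CS reads the new values at service start
    certutil -crl                                # publish a fresh CRL immediately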
Vadims Podāns, aka PowerShell CryptoGuy
My weblog: en-us.sysadmins.lv
PowerShell PKI Module: pspki.codeplex.com
PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
Check out new: SSL Certificate Verifier
Check out new: PowerShell File Checksum Integrity Verifier tool.

Similar Messages

  • What are the security implications of having JAVA running on my Mac Book Pro?

    What are the security implications of having JAVA running on my Mac Book Pro?

    Java on the Web (not to be confused with JavaScript, to which it's not related, despite the similarity of the names) is a weak point in the security of any system. Java is, among other things, a platform for running complex applications in a web page, on the client. That was always a bad idea, and Java's developers have proven themselves incapable of implementing it without also creating a portal for malware to enter. Past Java exploits are the closest thing there has ever been to a Windows-style virus affecting OS X. Merely loading a page with malicious Java content could be harmful.
    Fortunately, client-side Java on the Web is obsolete and mostly extinct. Only a few outmoded sites still use it. Try to hasten the process of extinction by avoiding those sites, if you have a choice. Forget about playing games or other non-essential uses of Java.
    Java is not included in OS X 10.7 and later. Discrete Java installers are distributed by Apple and by Oracle (the developer of Java.) Don't use either one unless you need it. Most people don't. If Java is installed, disable it — not JavaScript — in your browsers.
    Regardless of version, experience has shown that Java on the Web can't be trusted. If you must use a Java applet for a task on a specific site, enable Java only for that site in Safari. Never enable Java for a public website that carries third-party advertising. Use it only on well-known, login-protected, secure websites without ads. In Safari 6 or later, you'll see a lock icon in the address bar with the abbreviation "https" when visiting a secure site.

  • Having really high pings, packet loss and been on ...

    Hi. I've been on to BT 4-5 times in the last few weeks. Every time it's the same thing: I do all their checks and run the speed test, over and over. When I try to explain the problems I'm having, they say my speed is within the acceptable range. I'm at a loss dealing with BT. The customer service is terrible.
    Basically, the problem I'm having is really high pings. Even on the BT Wholesale speed test my pings used to be in the 30 ms range; now they are 70 on average. I play a lot of online games, and everything is unplayable. I know there is definitely something wrong, either with the routing or with some of the nodes on BT.
    Today I was trying to play Diablo 3 and the lag spikes were huge, jumping to 1-2k ms latency every 2-3 seconds. I ran some WinMTR tests and I'm seeing a lot of packet loss. I tried to explain that to BT on the phone, but it's like they don't have a clue what I'm talking about, or won't attempt to help me with the issue. Game companies usually tell me ISPs are willing to help with these problems, but BT seem to refuse to help with latency/ping issues; it's just "your speed is fine" etc.
    I've also noticed people in the BT forums saying something about G.INP and people needing a HH5 Type B. Can someone explain to me what this is, or whether it's something happening to my cabinet? I'm using the BT HH5 Type A. From what I can see, G.INP sounds like the issue I'm currently facing.
    Can some mod please help me? I'm really losing hope with BT.

    Hi guys.
    Still facing these issues. Is BT not going to do anything about it? Here is a WinMTR test to a game server I'm playing on.
    |------------------------------------------------------------------------------------------|
    | WinMTR statistics |
    | Host - % | Sent | Recv | Best | Avrg | Wrst | Last |
    |------------------------------------------------|------|------|------|------|------|------|
    | BTHUB5 - 0 | 1132 | 1132 | 0 | 0 | 10 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | 217.41.216.109 - 7 | 901 | 843 | 18 | 30 | 394 | 20 |
    | 213.120.158.234 - 7 | 909 | 853 | 21 | 25 | 51 | 23 |
    | 31.55.165.151 - 7 | 905 | 848 | 20 | 25 | 51 | 23 |
    | 31.55.165.109 - 7 | 905 | 848 | 21 | 25 | 49 | 23 |
    | 109.159.250.180 - 7 | 897 | 838 | 20 | 25 | 63 | 23 |
    | core1-te0-13-0-16.ilford.ukcore.bt.net - 7 | 913 | 858 | 27 | 34 | 57 | 33 |
    | peer6-0-9-0-22.telehouse.ukcore.bt.net - 6 | 917 | 863 | 27 | 32 | 56 | 32 |
    | 166-49-211-240.eu.bt.net - 7 | 913 | 858 | 28 | 32 | 59 | 28 |
    | 213.248.82.249 - 7 | 909 | 853 | 0 | 33 | 93 | 29 |
    | ldn-bb2-link.telia.net - 7 | 909 | 853 | 28 | 36 | 127 | 70 |
    | adm-bb4-link.telia.net - 7 | 909 | 853 | 32 | 37 | 76 | 34 |
    | adm-b4-link.telia.net - 6 | 917 | 863 | 34 | 38 | 65 | 35 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    | No response from host - 100 | 227 | 0 | 0 | 0 | 0 | 0 |
    |________________________________________________|______|______|______|______|______|______|
    217.41.216.109 seems to be the issue here: I'm getting packet loss of 7%, and the ping shoots up to almost 400 while the average is around 30.
    I've tried everything BT have told me, and I've phoned loads and loads of times to seek help. I don't know what else to do. Pings and latency are horrible, and I don't just mean a little lag: every online game is unplayable, and web pages load slowly or with broken images.

  • PKI CA CLUSTER CRL PUBLICATION FAILURE

    After configuring the PKI cluster, I am not able to publish a CRL. I am seeing the error below when I try to publish the CRL.
    Event log error
    Event ID 74
    Active Directory Certificate Services could not publish a Base CRL for key 1 to the following location on server DC.goryeal.com: ldap:///CN=PKI100A(1),CN=pki100p,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=goryeal,DC=com. 
    Directory object not found. 0x8007208d (WIN32: 8333).
    ldap: 0x20: 0000208D: NameErr: DSID-0310020A, problem 2001 (NO_OBJECT), data 0, best match of:
    'CN=PKI100P,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=goryeal,DC=com'
    I tested my cluster using the commands below and it seems to be configured correctly
    C:\Users\administrator>certutil -config   pki100p\pki100a -ping
    Connecting to pki100p\pki100a ...
    Server "PKI100A" ICertRequest2 interface is alive
    CertUtil: -ping command completed successfully.
    C:\Users\administrator>certutil -config   pki100p\pki100a -pingadmin
    Connecting to pki100p\pki100a ...
    Server ICertAdmin2 interface is alive
    CertUtil: -pingadmin command completed successfully.
    C:\Users\administrator.GORYEAL>certutil -getreg ca\crlpublicationurls
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\PKI100A\CRLPublicationURLs:
      CRLPublicationURLs REG_MULTI_SZ =
        0: 65:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl
        CSURL_SERVERPUBLISH -- 1
        CSURL_SERVERPUBLISHDELTA -- 40 (64)
        1: 79:ldap:///CN=%7%8,CN=pki100p,CN=CDP,CN=Public Key Services,CN=Services,%6%10
        CSURL_SERVERPUBLISH -- 1
        CSURL_ADDTOCERTCDP -- 2
        CSURL_ADDTOFRESHESTCRL -- 4
        CSURL_ADDTOCRLCDP -- 8
        CSURL_SERVERPUBLISHDELTA -- 40 (64)
        2: 0:http://%1/CertEnroll/%3%8%9.crl
        3: 0:file://%1/CertEnroll/%3%8%9.crl
    CertUtil: -getreg command completed successfully.

    It is a known issue. When you set up an AD CS cluster and then renew the CA certificate with a new key pair, the first CRL has to be published manually. This can be done by running "certutil -dspublish -f crlfilename.crl"; it creates the new entry under the cluster resource
    name, and after that the CA server will be able to publish files there.
    And do not forget that all CAs in the cluster must have write permissions on the cluster resource name container under the CDP container.
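    A hedged sketch of that sequence, using the CA name from this thread and placeholder paths, run from an elevated PowerShell prompt on the active cluster node:
    # One-time manual publication of the current base CRL into AD
    certutil -dspublish -f C:\Windows\System32\CertSrv\CertEnroll\PKI100A.crl
    # Afterwards the clustered CA should be able to publish to the ldap:/// location itself
    certutil -crl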
    My weblog: en-us.sysadmins.lv
    PowerShell PKI Module: pspki.codeplex.com
    PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
    Check out new: SSL Certificate Verifier
    Check out new: PowerShell FCIV tool.

  • Implication of having serialization profile

    Can anybody tell me why a serialization profile is used in the material master? It somehow serves the same purpose as Batch Management. So what's the advantage of having a serial number? Moreover, a serial number has its constraints, like uniqueness at plant level; thus a plant-to-plant transfer of a material cannot be done because of this. So why not use Batch Management instead?

    Hello,
    A serial number profile is used to assign a serial number to an individual product during a business transaction, for example a goods receipt or a delivery.
    In Batch Management you assign a batch number to a lot, say of 100 EA. A serial number profile will require a unique serial number for each of those 100 pieces within a specific batch.
    It is generally used for products where individual identification is required for tracking purposes.
    Hope it helps. Please reward points if you find this useful.
    BR
    Sumit

  • How to offline an Enterprise Root CA

    For internal PKI, I'm a big fan of using Enterprise vs. Stand-alone, for simplicity and ease of management. The problem is, I just can't find definitive answers on how to properly offline it. Most people say to not bother, and their justifications
    are vague and nebulous. My Enterprise CAs are NOT DCs. I've given this a lot of thought, and these are the things I think need to be considered...
    If you take the Enterprise root CA offline, you'll need to consider three things:
    1. Change the Enterprise root CA's CRL publication interval to be longer than the periods for which the Enterprise root CA will be offline, and probably also disable delta CRLs on the Enterprise root CA for simplicity and ease of management (see the certutil sketch at the end of this post). When you do
    boot the Enterprise root CA, be sure to publish a new CRL from it into AD.
    2. Make sure the Enterprise root CA isn't needed for anything but:
     a. The initial, one-time loading of the root certificate into AD for automatic distribution to clients by ADDS.
     b. Creating certificates for the subordinate/issuing CAs.
     c. Publishing the Enterprise root CA's CRL to AD for reading by the clients.
    Is there anything else the Enterprise root CA needs to be online for?
    3. By default, every computer account password expires every 30 days. This won't be a problem because when you boot the Enterprise root CA, it'll just change its computer account password if it has expired.
    So, having said all of that, should I offline the Enterprise root CA? If not, why?
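    A minimal sketch of what point 1 might look like on the root CA (illustrative values only, assuming default AD CS registry locations and an elevated PowerShell prompt on the root CA):
    certutil -setreg CA\CRLPeriodUnits 6
    certutil -setreg CA\CRLPeriod "Months"      # base CRL outlives the planned offline window
    certutil -setreg CA\CRLDeltaPeriodUnits 0   # 0 disables delta CRL publication
    Restart-Service certsvc
    certutil -crl                               # generate the long-lived CRL before shutting the CA down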

    On Mon, 17 Feb 2014 08:14:20 +0000, Daniel L. Benway wrote:
    The real question is whether or not I can or should shut down the Enterprise root CA after it has published the root certificate to AD, after I've created the sub/issuing CAs, and after I've published the root CA's CRLs to AD and changed the root
    CA's CRL intervals to appropriate values.
    Brian did answer your question. A PKI is all about trust, and the root of
    that trust is the private key material of the root CA. The reason one
    deploys a standalone, offline root CA in the first place is to reduce
    the possibility of an attack against the root CA's key material and the
    accepted method to reduce that attack surface is to ensure that the root CA
    is never attached to a network. That does not mean attaching it to the
    network for a while and then again periodically afterwards; never means
    never. The minute you attach the root CA to a network, you've reduced the
    trust level, and once a trust level is reduced, it cannot be increased
    without redeploying.
    Brian and I have both seen the argument that an offline Enterprise root is
    easier to manage than an offline Standalone root and in practice, that
    simply isn't the case:
    1. Publishing the root CA certificate and CRL of an Enterprise root is, as
    you point out, automatic; however, transferring the certificate and CRL via
    removable media and then using certutil, given the infrequency of those
    operations, is a trivial procedure (see the certutil sketch below). Operationally you gain very little by
    using an Enterprise root here, and taking advantage of the automatic
    publication requires that the root be put on the network, which defeats the
    purpose of keeping it permanently offline in the first place.
    2. Since the only certificates that a root should be issuing are for SubCAs
    the advantage you get with an Enterprise root being able to use certificate
    templates is pointless.
    3. Any management functions or benefits you may be able to realize by
    having the root joined to AD are obviated by the fact that you're planning
    on having it offline and disconnected in the first place.
    The bottom line here is that any perceived advantage of having an offline
    root be an Enterprise CA as opposed to a Standalone root is defeated by
    the simple fact of having it attached to the network at any point in its
    lifetime. Security and trust trump ease of management in this case, and as I've pointed out, the difference between the actual and the perceived ease of management is minimal at best.
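    As an illustration of point 1 (not part of the original reply), publishing the root CA's certificate and CRL, carried over on removable media, could look like this when run from a domain-joined machine with Enterprise Admin rights; the file names are placeholders:
    # Publish the root CA certificate into the AD certificate stores (trusted roots / AIA)
    certutil -dspublish -f RootCA.cer RootCA
    # Publish the root CA's CRL into the CDP container
    certutil -dspublish -f RootCA.crl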
    Paul Adare - FIM CM MVP
    Minds are like paragliders. They work best when open.

  • PKI Design / Migration - Questions

    Hello,
    Our organization currently uses a single-tier enterprise root CA for issuing certificates. We are growing, and I would like to redo this design in accordance with Microsoft best practice.
    I just have a few questions:
    My original thought was to add an enterprise subordinate CA and decommission the enterprise root CA we currently have running, but I am not sure if this is possible or recommended, as I am reading many articles stating that you should deploy a standalone root CA (offline)
    and then create an enterprise subordinate CA for issuing certificates.
    If this is the case, how would I migrate servers / users over to the new PKI infrastructure without causing service disruptions?
    Thank You

    I just want to have some answers to give MGMT when they ask.
    Here's your own answer:
    "...and I would like to redo this design in accordance with MS best practice."
    Brian gave you best practice:
    1 x standalone root CA (offline) – for security
    2 x issuing CAs (enterprise subordinate CAs):
    2x – for redundancy
    Enterprise – so that they use AD for certs, CRLs, autoenrollment, etc.
    I would also add that if you will not be revoking existing certificates issued by the old CA, you can increase the CRL publication interval on the old CA from the default of one day to 99 years. This basically leaves you with a static CRL and a static CDP web site (you don't
    need to publish a CRL on the old CA each day).
    http://blogs.technet.com/b/pki/archive/2012/01/27/steps-needed-to-decommission-an-old-certification-authority-without-affecting-previously-issued-certificates-and-then-switching-all-operations-to-a-new-certification-authority.aspx
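    A minimal sketch of that 99-year setting on the old CA (illustrative only, assuming default AD CS registry locations and an elevated PowerShell prompt on the old CA):
    certutil -setreg CA\CRLPeriodUnits 99
    certutil -setreg CA\CRLPeriod "Years"       # effectively a static CRL
    certutil -setreg CA\CRLDeltaPeriodUnits 0   # stop publishing delta CRLs
    Restart-Service certsvc
    certutil -crl                               # publish the long-lived CRL one last time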

  • Perf. implications of high frame rate

    What are the performance implications of using a high frame
    rate in Flex? Our designers suggest that a frame rate of 60 fps gives
    smoother animation. They are creating Flex components with Flash
    CS3 and want to use the higher rate when using them in Flex.

    As you might expect, a higher frame rate is going to consume
    more of the CPU. It's pretty much that simple. Some end users'
    computers may not be able to keep up with the frame rate, and their
    experience will be less smooth, probably less smooth than if you
    used a lower frame rate. I suggest you experiment.

  • CRL does not appear to be working

    Hello All
    Can someone please help me with the following question :)
    I set up a lab in which I have a Windows 2003 R2 AD and a Windows 2003 R2 enterprise root CA (I will upgrade the lab to 2012 R2 later), and a Windows 2012 R2 IIS server with a test web site. I also have a Windows 7 client in the lab running the current version
    of Internet Explorer, i.e. version 11.x.
    I created a CSR, obtained a certificate from the CA and bound it to the web site, so I can go to
    https://TestSite with no problem, and if you click on the padlock it tells you about the certificate, who issued it, etc.
    When I look at the certificate I can see the CDP information, and using ADSIEDIT I can see the relevant object in the AD configuration container. So again, all looks fine.
    Next, from the Certification Authority MMC snap-in on the CA, I revoked the certificate, after which I chose the option to publish a new CRL.
    Problem:
    However, when I use IE (or Firefox for that matter) I can still go to
    https://TestSite and there are no warnings about the certificate having been revoked. I understand the client caches the CRL and does not always go and get a new one when a new one is published, but rather waits for its local cache to expire (a good
    reason for using OCSP, I believe).
    Is the reason the web browser still shows the page that the CRL cache on the client is still active and needs to be refreshed from the CRL in AD?
    If so, how can I force the browser to do this, and is there a registry key (or GPO) to say "do not cache CRLs"?
    For better security, I guess I would be better off setting up an OCSP server?
    If I set up OCSP, will I need to remove the CRL, and will IE/Firefox know how to check with OCSP when opening a web site, or do I have to configure Windows/IE via GPO to tell it to use OCSP?
    Sorry for the several questions; I would appreciate it if someone could help me out with the answers. Thanks in advance.
    Thanks All
    AAnotherUser__
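    As a side note (not part of the original question), one way to see exactly which revocation information the client is using, assuming the web site's certificate has been exported to a file (the path below is a placeholder), is to run the following on the Windows 7 client from an elevated PowerShell prompt:
    # Fetch AIA/CDP data from the URLs in the certificate and show its revocation status
    certutil -verify -urlfetch C:\Temp\TestSite.cer
    # Remove cached CRL downloads so the next revocation check retrieves a fresh copy
    certutil -urlcache crl delete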

    Hi,
    You can purge the client CRL cache using certutil.exe
    Open an elevated command prompt and enter certutil -setreg chain\ChainCacheResyncFiletime @now
    I believe this works for Vista+
    As for your other questions: a CRL, and if configured a delta CRL, is generated and published based on the publication interval the CA admin has configured. The trick
    is to find the sweet spot between an acceptable period of time during which a client could potentially still trust a revoked certificate and the load you are willing to put on your CDP. To configure your CA to publish a CRL every day and a delta every hour,
    run the following on the CA.
    certutil -setreg CA\CRLPeriodUnits 1
    certutil -setreg CA\CRLPeriod "days"
    certutil -setreg CA\CRLDeltaPeriodUnits 1
    certutil -setreg CA\CRLDeltaPeriod "hours"
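    One detail worth adding (not in the original reply): the new periods are only picked up when the CA service restarts, and a fresh CRL can then be pushed out immediately, for example:
    Restart-Service certsvc              # AD CS reads the CRL period values at service start
    certutil -crl                        # force a new base (and delta) CRL right away
    certutil -getreg CA\CRLPeriodUnits   # confirm the configured value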
    Although OCSP is real-time, the source for OCSP is still the CRL, and thus your CRL publishing interval is still key.
    Vista+ will prefer OCSP over the CDP. The moment you set up OCSP and configure the AIA extension you are good to go. Certificates that have already been issued will, however, lack this
    information and keep using the CDP until they are renewed or replaced.
    HTH Ben
    I wouldn't vote for this solution, for the following reasons:
    1) The crypto cache clear is an ad-hoc solution; it works only once and it must be run locally. In addition, the command requires administrator permissions. You would have to log on to every station with admin credentials and run the command.
    2) The suggested CRL validity settings aren't reliable. Maybe they are acceptable in a high-volume environment where certificates are revoked frequently; otherwise it is a waste of resources. And even with these settings there will be a delay, and not just 1 hour
    but even more, because the overlap will be added.
    3) This solution requires a highly available CRL distribution server and is an expensive solution in most cases (server downtime has a cost).
    The golden rule here is to shut down the service if it is compromised somehow.
    My weblog: en-us.sysadmins.lv
    PowerShell PKI Module: pspki.codeplex.com
    PowerShell Cmdlet Help Editor pscmdlethelpeditor.codeplex.com
    Check out new: SSL Certificate Verifier
    Check out new: PowerShell FCIV tool.

  • High redo log space wait time

    Hello,
    Our DB is showing very high redo log space wait time:
    redo log space requests 867527
    redo log space wait time 67752674
    LOG_BUFFER is 14 MB, we have 6 redo log groups, and each redo log file is 500 MB.
    Also, the amount of redo generated per hour :
    START_DATE START NUM_LOGS MBYTES DBNAME
    2008-07-03 10:00 2 1000 TKL
    2008-07-03 11:00 4 2000 TKL
    2008-07-03 12:00 3 1500 TKL
    Will increasing the size of LOG_BUFFER help reduce the redo log space waits?
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided, the following information could be helpful:
    1. You are currently targeting approx. 6 GB of memory with this single instance, and the report shows that physical memory is 8 GB. According to the advisories it looks like you could decrease your memory allocation without hurting your performance.
    In particular the large_pool_size setting seems to be quite high although you're using shared servers.
    Since you're using 10.2.0.4 it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the shared pool components within the given target dynamically.
    2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It could be worth checking why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
    3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters, which favor index access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing excessive index access. Also, most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
    4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123.000.000 logical blocks, which means almost 0.5 GB per second. Check the top SQLs; there are a few that are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17.000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see whether they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
    5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the default value for 10g. I think this will also convert the redo format to 10g, which could reduce the amount of redo generated. But be aware that this is a one-way operation: once compatible has been set to 10.x, you can only go back to 9i via a restore.
    6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth checking whether this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor it given the current undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Slow Render with High-End System?

    I'm currently working on a (in my opinion) high-end Windows system for video editing. The system is about 2 years old and cost a fortune at the time, so I'm expecting significantly better speed. So here's my problem:
    I'm working primarily in Premiere Pro and After Effects. All the media I work with are JPEG sequences imported as video. Often I have multiple sequences (up to 7 or 8) overlaid and tweaked with dissolves and plugins like Twixtor. I also use Adobe Dynamic Link from After Effects to Premiere and vice versa. All the footage is currently 1080p, but in future I will want to render 4K. I'm aware that a 4K workflow is probably a pain, so I'm surely going to edit offline with 1080p. However, I can't get any real-time playback with all my sequences; I ALWAYS have to render a preview to watch my edits. I don't know if I just have too high expectations for my system, but I'm fairly sure there has to be a reason for this lack of performance. Maybe the Dynamic Link is slowing my system down?
    System Specs:
    Model : HP Z400 Workstation 103C_53335X
    Mainboard : HP 0B4Ch
    System BIOS : HP 786G3 v03.15 10/29/2010
    RAM : 12GB ECC DIMM DDR3
    Processor : Intel(R) Xeon(R) CPU           W3530  @ 2.80GHz (4C 3GHz/3GHz, 2.13GHz IMC, 4x 256kB L2, 8MB L3)
    Chipset:
         HP X58 I/O Hub 2x 2.4GHz (4.79GHz)
         HP Core Desktop (Bloomfield) UnCore 2x 2.4GHz (4.79GHz), 3x 4GB ECC DIMM DDR3 1GHz 192-bit
    Graphic card : NVIDIA Quadro 4000 (8CU 256SP SM5.0 950MHz, 512kB L2, 2GB 2.81GHz 256-bit, PCIe 2.00 x16)
    Harddisks:
          4x WDC WD2002FAEX-007BA0 (1TB, RAID10/SATA600, SCSI-4, 3.5", 7200rpm) : 932GB (C:)
         Intel Raid 1 Volume (4TB, RAID, SCSI-4) : 4TB (D:)
         HL-DT-ST BD-RE BH10LS30 (SATA150, BD-RE, DVD+-RW, CD-RW, 4MB Cache) : k.A. (E:)
    Thank you very much in advance for your help and I apologize for any grammatical mistakes since english is not my main language.

    Valentin,
    I have always called a raid10 a solution for the paranoid in a hurry. It takes 4 drives to give you the capacity and performance of two disks, but gives you security by the mirroring.
    Before going into your specific situation, allow me to tell something about volumes and drives, because they can be confusing and at the same time they are very important for optimal performance of a system.
    Single disk, not partitioned is 1 disk = 1 volume to the OS
    Single disk, partitioned is 1 disk = multiple volumes (not a good idea BTW)
    Multiple disks in one raid array is Many disks = 1 volume
    Multiple disks in one raid array with partitions is Many disks = multiple volumes (not a good idea either)
    Each volume has a distinct drive letter for access.
    Partitioning is a thing of the past and should not be used at all on modern systems.
    You have to think about volumes more than about number of disks. In my current system I have 27 different physical disks but only 4 volumes. In the old one I have 17 disks and 5 volumes.
    Now that we are clear what we are talking about, volumes with distinct drive letters, we can address your situation.
    You have TWO volumes, C: (single disk) and D: (4 disks in RAID10). Spreading the load across only two volumes is more demanding and gives slower performance than using more volumes, unless one or more of the volumes are very fast, as I tried to explain in a previous reply (remember Isenfluh/Sulwald?). If you add an SSD as you intend, you increase the number of volumes to 3, which will definitely help performance, because SSDs are faster than conventional disks and the pagefile can be stored on the SSD, so your overall performance will go up.
    Compare your setup with mine, with rough estimated figures:
    | Volume | Valentin  | Transfer rate Valentin | Harm       | Transfer rate Harm |
    | C:     | 1 HDD     | 125 MB/s               | 1 SSD      | 450 MB/s           |
    | D:     | 4x RAID10 | 250 MB/s               | 1 SSD      | 450 MB/s           |
    | E:     | NA        | NA                     | 21x RAID30 | 2,700 MB/s         |
    | F:     | NA        | NA                     | 1 HDD      | 150 MB/s           |
    These figures are indicative, but they do show where the major differences are. In my experience the disk setup is often overlooked, but it has a huge impact on a system's responsiveness. It is the weakest (slowest) link in the chain, after all, and with your workflow, doubly so.
    But in your specific case there is something else, and that is your disappointing hardware MPE score. 100 seconds is extremely slow, even for a Quadro 4000. It would be quite normal to see a score around 8-9 seconds on such a system, maybe around 12-13 seconds with your ECC memory, but 100 is way too slow. Some background services or processes are interfering with the hand-over from memory-GPU-VRAM-GPU-memory. This can be caused by a myriad of things, but a good starting point would be the BlackViper list of services to set to manual or disabled, and taking a closer look at the running processes with Process Explorer. There should normally be fewer than 50 processes running.
    Hope this helps.

  • Is there a way to publish a high-res .ipa from Flash CS5.5 with AIR for iOS?

    Hello,
    I've developed an app for iOS in Flash specifically for iPad 3s. My client uses SOTI MobiControl as its mobile device management solution. If I publish out of Flash using AIR SDK 3.1 or newer, the app is created and looks high-res on the iPad 3. When they try to add the app to SOTI for in-house distribution, it fails with an error claiming that "CFBundleDisplayName" has been left blank.
    After having looked into it I've been able to find out that when publishing with versions of the AIR SDK older than 3.0, the info.plist file within the app package includes a CFBundleDisplayName. In versions newer than 3.0 there is no CFBundleDisplayName in the info.plist.
    Is there a way to publish for iOS from Flash utilizing an older version of the AIR SDK and having true high-res graphics that will look proper on an iPad3?
    Thanks in advance for any help.
    Phil

    It's not AIR that's lacking the key in the plist, it must be Flash Pro, although it's just running adt itself. I use AIR 3.4 and I compile via adt command line (because Flash Builder 4.6 still has not been updated to handle warnings from ANEs during compile, grr). I have CFBundleDisplayName in my plist file. Have you tried compiling on the command line?
    Here's an Adobe page with a bunch of examples compiling with ADT on the command line:
    http://help.adobe.com/en_US/air/build/WS901d38e593cd1bac1e63e3d128cdca935b-8000.html
    Being unaware of what in-house distribution system that actually is, I have noted that some people had issues in the past submitting apps that Flash compiles. Their solution was to rename AppName.ipa to AppName.zip, extract the Payload folder, copy the AppName.app inside it out, then rename that AppName.app to AppName.ipa. Then it was accepted. Not sure if that old issue still exists.

  • High cpu usage when using flash player

    Hi,
    Just updated to Mavericks and now I'm having very high cpu usage while watching streams using flash player. CPU usage is somewhere around 110% and after a moment the stream starts to lag and fps is slowing down. I also tried using Chrome but with no difference. Flash player is the newest version 11.9.900.117.
    Any ideas on how to fix this?

    I think it's best to hit the "report a problem" button in Safari, since this has appeared since I updated Safari to 5.1.4! Maybe they will make an additional update.

  • High Availability of BPEL System

    We have a high-availability architecture configured for the BPEL system in our production environment.
    The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
    We have 5 BPEL processes which are getting invoked within each other. For eg:
    BPELProcess1 --> BPELProcess2 --> BPELProcess3, BPELProcess4 &
    BPELProcess4 --> BPELProcess5
    Now, when all of the above BPEL processes are deployed on both nodes of the BPEL server, how do we handle the endpoint URLs of these BPEL servers?
    Should we hardcode the endpoint URL in the invoking BPEL process, or should we replace the IP addresses of the two BPEL server nodes with the IP address of the load balancer?
    If we replace the IP address of the BPEL server with the IP address of the load balancer, it will require us to modify, redeploy and retest all the BPEL processes again.
    Please advise
    Thanks

    The BPEL servers are configured in an active-active topology and RAC is used in the database tier of the architecture.
    The BPEL servers are not clustered. A load balancer is used in front of the two BPEL server nodes.

  • MSI Pro E high IOH?

    As with most people, I seem to be having high IOH temperatures at idle. The temperature was monitored in the BIOS and with HWMonitor.
    relevant hardware would be the MSI X58 Pro E, 920 d0, Corsair H50 heatsink, Antec P193 case with two 1200rpm slipstreams up front.
    Before I installed the OS, I decided to head into the BIOS to get the fans setup (had to make sure the impeller maintained full RPM), and noticed that the IOH was sitting idle at 82ºC. Decided to let it run about 20 mins before checking it again, and the IOH still stayed up around 82ºC. Took the side panel off to take a look around (no interference noticed, all air had a nice flow) for about 10 mins, and the IOH soared up to 92ºC. Apparently, the side fan works quite well in keeping the IOH down...
    Installed XP and ran a few long videos while watching core temps and IOH, stayed within 80-82ºC. Have not tried anything that was graphics intensive as of yet (keeping it offline until I can sort out the IOH).
    I have seen the threads about modifications to attach an aftermarket heatsink to the IOH, but I'd rather use that as a last resort (replacing the thermal paste falls under this as well; I'd rather not have to extract everything from the case)... Others appear to have used small fans, but I'm not quite clear on how they were mounted (if anyone can shed light on that one, I'd be quite thankful).
    thanks for any help!

    Sorry for the double post, but an update on the system:
    I went through almost ALL the computer stores beside campus (literally an entire street of computer parts), starting from the furthest and working towards the closest... and the last one (closest to campus) had the Spot Cool. No other store had any 40 mm fans in stock, OR the Spot Cool....
    Anyway, I mounted the Spot Cool to the PCI holes on the back of the case, which allowed me to push it right up against the heatsink (I tried your mount location HU16E; temps rose slightly, but I had already taken off the Kama middle fan. Plus it lightly scratched the paint off the mobo in that corner, with a little metallic colour showing through... anything I can use to cover that back up?).
    Initial results appear to be good, with temps around 64ºC after about 15 mins. The fan's blue lights, though, are of a slightly different shade compared to the motherboard + GPU cooler, but it's not too noticeable.
