Horrible scopes utilization

AGAIN ... rather than nifty new features, I'd like SOME pressure on Adobe to fix major things that need a "touch" to get them working in a basically USEFUL manner.
Such as ... Speedgrade's scopes. Look at the enclosed pic, a screen grab of my "4 scopes/10bit" layout I currently see.
Does ANY of this look like it should be this way?
Let's start with the histogram, upper left. Currently showing negative values on two colors. If I adjust the "base" in any way, those move somewhere ... but they DO NOT stay where I put them after I release the control ... and note, I'm using a track wheel. I rotate the wheel or outer ring to where I like, then hit a button to 'release'. The placement of the scale always moves on release, either up OR down, so I have to just move it about until it ends up sorta where I want it after the 'bounce'. Time-consuming and a LESS THAN OPTIMAL workflow.
Next ... lower left quadrant, the vectorscope. Note there are no "graticules" visible on this one, as there are at least on the other three. Which means ... no target boxes for those using the nice spendy calibration cards to calibrate their footage; you have to guess where the various colors are on the scale, and of course ... no "skin-tone" line available either. Stuuupid.
Next ... whole right side. Note how weird the parade and waveform scales are set up? They compress the scales rather than showing the work you've done on a full-height area like every other scope ever made. But they don't even do it the same on both scales. Note that on the top one, the black area has shortened to match the number-scale figures, to about HALF the height it should be. And the other one ... the black box is still full height, but the NUMBERS have shrunk to half-box height.
What the Hell? Not ONE of these scopes is displayed in any sensible or easily usable way.
Neil
(and yes, I've done multiple wish-requests on this)

Drystonewall,
I am sincerely sorry, but I don't see exactly what it was about Shooternz's post that struck you that way. I looked back through his comments and would have to agree with the points about CS5.5->CS6; that was (to my view also) a definite version-step up. Did everyone using CS5.5 need or directly benefit from the changes involved? Of course not. Yet it still was, overall, a version upgrade. In our studio, with three to five computers running Photoshop (a couple of them laptops running under their "companion" desktop), we chose whether or not any particular work area needed any particular upgrade. Our sales room was nearly always "two steps back" ... mine was normally a version back, and the wife's was frequently spanking new. Those operating in the video/film sphere obviously made the same decisions.
Adobe changed their operating model at the end of the CS6 software cycle. The switch was to provide software as a service rather than as a product. Conceptually, rather than having coders work on making a large enough change in one whop to justify a lot of people shelling out big bucks for the "new do", it was a change to simply having coders constantly working on upgrading the software and sending out multiple upgrades as they're ready to go, rather than holding them back and bundling them into a new release.
Hasn't been a perfect roll-out, as everyone has noticed. And there've been some major foul-ups in there. Most of which they've attacked and fixed ... yet a few big ones remain, such that some entire "houses" have felt a need to work elsewhere for the time being. That's definitely not a perfect roll-out. Clearly, for some people, with their particular mix of OS, footage formats & codecs, other software involved in steps of their pipeline from capture to delivery, and the particular format/codec they need to deliver, the Adobe stuff isn't a usable option. At this time.
The only one they've said they're not working at bringing into the "works" is footage with a variable bitrate, such as from some smart-phones and computer-monitoring software. Everything else they're working on as they can, solving one problem and moving on to the next. Considering the range of users they're trying to build software for, it is a formidable task. Everyone from the relatively simple demands I place on things (I'm also a one-man shop as far as video goes) to major TV delivery houses, and clearly they want to become more usable for "film" work also.
And their path forward for the software has been clearly stated for over a year: the "CC" model, as they call it. As a ... convenience? ... they are still selling CS6. For now. They are putting no development into it, and again, have been very clear about that.
If CS5.5 floats your boat just fine, that's great ... enjoy it. No reason you shouldn't, and especially if it does everything you need ... hey, the cost is great, right?
Their first iteration of Speedgrade was ... interesting. A very different GUI than any of the CS products, as they'd just acquired it and hadn't had a chance to modify the GUI into something that someone used to Adobe software would even recognize. Even the "full" CS6 iteration of Speedgrade ... which was the first one I purchased ... was still complicated for me and extremely difficult to work with. Sending anything to it from PrPro was a royal pain, and getting it back into PrPro even looking like it did in Sg was almost impossible for me. So I did most of my 'grading' (such as it was) in PrPro, and a little actually in AfterEffects using Shian Storm's "ColorGhears" LUTs.
When the CC7 stuff arrived, I tried it out ... and the first couple weeks were crazy. It behaved differently than the ol' "6" version, and their explanation of the differences in working within a DL timeline didn't really explain HOW differently this was designed to work. After getting another <dot>-x down the road and learning how it's supposed to work, and not trying to make it work the old way ... wow, it's been a game-changer for me. I don't have to know squat about transferring projects, transcoding, EDLs, rendering out and importing back ... NONE of that.
It's not part of my workflow. I just DL a timeline over to SG and immediately work on that timeline. Send it back to PrPro and immediately I'm back in PrPro working away. Between the time saved going to and from, the total lack of a need to move files from one to the other or normally even re-link anything, and the vastly easier project management from not having all those extra files to worry about (knowing which ones to dump and which to keep), that's several hours a week saved.
Would good professional quality scopes help? OH BABY YEA.
Will they finally get them? I expect so ... they've added a bunch of other things and fixed a lot of other things, so I think they'll get to it eventually. But even if they don't, the ease and speed of my workflow saves me a ton of time and hassle.
Time is money ... and I save a bunch of money using this software. And I've probably wasted a ton of your time if you've actually read this whole thing I've written. If you do take the trial, I would heartily suggest asking some of the users around here how to utilize this in your workflow. Give a typical project, or perhaps an overview of a couple typical projects, and ask how folks would work them. Shooternz and Jim Simon and several of the others will probably chime in with ideas ... especially after one of them does and, of course, could use some ... well, not correction precisely, but ... guidance? ... in their choices.
I don't suggest this because I think, in any way, shape, or form, that you don't know your work ... in fact, I'm assuming you are a LOT more knowledgeable in many areas of video work than I am. The reason I suggest this is that the Adobe suite has become something very different from what we had before. It takes a different workflow pattern ... and it's not always obvious what that is. I don't know how many people have come on here frustrated because they can't figure out how to do "X", and are frustrated that there's quite a work-around needed to do it anymore.
Now, about the third post in the thread, someone will ask them why they want to do that ... as that step simply isn't needed anymore. Sometimes people actually can step back, go ... oh ... try the "new" way, and go away happy. Sometimes there isn't a way to do what they want without a stupid workaround. Yea, those are damn frustrating.
And sadly, sometimes people are simply so used to the "fact" that ALL such workflows require you to do "x" that they don't want to hear any comments about the way it MUST be done. Such as ... I'm so amazed at the number of people who transcode for their NLE, then insist on transcoding to get the best codec for grading ... then transcode again for output. Are there a couple of file/camera-capture formats that, say, PrPro can't work with well or at all? Yep. But much of the transcoding people write about involves files PrPro handles just fine these days ... and can DL over to SG without need of anything. So ... why mangle your footage, take the extra time both out of PrPro and into Sg, and take up far more disc space?
I dunno ... but some people insist.
For so many things, there's a different way of doing them now. And especially for this one-man shop, the Adobe stuff saves me far more time than the CC model costs me, compared to any other way I could be regularly working and upgrading my software.
But as I'm very aware ... everybody's mileage varies.
Neil

Similar Messages

  • Resource Capacity Utilization Question

    Hi, Please let me know if the following scenario is a possibility.
    We currently maintain our capacity utilization at 90% for any resource and the same number is carried over to APO with CIF. The requirement we have is to set this utilization to 100% for the rolling current date + 3 months and 90% outside the 3-month horizon. I know that the dates can be specified explicitly, but I don't know if the rolling 3 months is possible. Please let me know if this can be achieved via config.
    Thanks in advance.

    Dear ,
    I would request you to post the thread in the SDN APO forum.
    I do not think there is scope for this requirement in SAP R/3, but APO PPDS can give you a solution for the same.
    Regards
    JH

  • TopLink Essentials: Using spring session scope on the EntityManagerFactory

    Hi,
    In one of our projects we are trying to utilize the 2nd-level cache of TopLink Essentials. The server code uses Spring (2.0.7) and is deployed to a web container. We use a Java client and serialize every object between the client and the server.
    When a client is started, a lot of "static/common" data is read from the server. The first client started will therefore pay some extra performance cost, but additional clients should hopefully be able to perform better due to the cache. Most of the static data is accessed without using JPQL and should therefore not bypass the cache.
    If we configure the EntityManagerFactory using default Spring scoping (singleton), it seems like we are not able to utilize the cache. We can see from the log files that each time a client is started, a lot of SQL is executed against the database (Oracle).
      <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="jpaVendorAdapter">
          <bean class="org.springframework.orm.jpa.vendor.TopLinkJpaVendorAdapter">
            <property name="showSql" value="false" />
            <property name="generateDdl" value="$repository{toplink.generateDdl}" />
            <property name="database" value="$repository{toplink.database}" />
          </bean>
        </property>
        <property name="jpaProperties">
          <props>
            <prop key="toplink.weaving">static</prop>
            <prop key="toplink.logging.level">FINEST</prop>
          </props>
        </property>
      </bean>
    When we change the scoping to Spring session scope, the behavior is very different. Then we can see that the first client generates a lot of SQL against the database and pays a startup cost, while subsequent clients seem to be able to utilize the cache:
      <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" scope="session">
        <aop:scoped-proxy />
        <property name="dataSource" ref="dataSource" />
        <property name="jpaVendorAdapter">
          <bean class="org.springframework.orm.jpa.vendor.TopLinkJpaVendorAdapter">
            <property name="showSql" value="false" />
            <property name="generateDdl" value="$repository{toplink.generateDdl}" />
            <property name="database" value="$repository{toplink.database}" />
          </bean>
        </property>
        <property name="jpaProperties">
          <props>
            <prop key="toplink.weaving">static</prop>
            <prop key="toplink.logging.level">FINEST</prop>
          </props>
        </property>
      </bean>
    Having read the documentation of the Spring session scope, I'm having a hard time explaining this behavior. As I understand it, the EntityManagerFactory will be stored in the HTTP session, resulting in a new EntityManagerFactory for each session. I cannot understand how this can result in us being able to utilize the 2nd-level cache. What is the relationship between the EntityManagerFactory and the 2nd-level cache?
    Is this an accepted way of configuring the EntityManagerFactory, or will there be a downside?
    I hope someone is able to explain this behavior for me :-)
    Best regards,
    Rune

    Hi Rune,
    To understand the shared cache behavior you actually need to understand more about what TopLink Essentials does than what Spring does. When a new factory is created, TopLink Essentials actually just proxies the server session with a factory instance, so the same server session is used. This is why you are seeing the same cache used across multiple factories of the same session.
    In the first case, if you are not using JPQL then what are you using to load the data and why are you thinking that it will not bypass the cache?
    Using a factory instance for each session is not what I would recommend doing as there are some additional costs associated with establishing a factory (even though the session already exists). The first way should be the correct way, I am just not sure what the circumstances are that are causing your cache to not be warmed. You may want to post more details about that so people can better help you out with that angle.
    -Mike
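    To make the shared-cache point a bit more concrete, here is a minimal, hypothetical sketch. The Product entity, the "staticData" persistence-unit name, and the exact hint value are my assumptions (based on the TopLink Essentials query-hint documentation), not something from this thread. It shows how a find() by primary key can be served from the shared session cache, while a JPQL query only consults that cache if you opt in with the toplink.cache-usage hint:
      // Hypothetical entity and driver; assumes a persistence.xml with a
      // persistence unit named "staticData" configured for TopLink Essentials.
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Id;
      import javax.persistence.Persistence;
      import javax.persistence.Query;

      @Entity
      class Product {
          @Id Long id;
          String name;
      }

      public class SharedCacheSketch {
          public static void main(String[] args) {
              EntityManagerFactory emf = Persistence.createEntityManagerFactory("staticData");

              // First lookup goes to the database and warms the shared (2nd-level) cache.
              EntityManager em1 = emf.createEntityManager();
              Product first = em1.find(Product.class, 42L);
              em1.close();

              // A later find() by primary key, even from a different EntityManager,
              // can be served from the shared cache without issuing SQL.
              EntityManager em2 = emf.createEntityManager();
              Product cached = em2.find(Product.class, 42L);

              // A JPQL query normally goes to the database; TopLink Essentials can be
              // asked to check the cache first via a vendor hint (hint name/value are
              // assumptions based on the TopLink Essentials docs).
              Query q = em2.createQuery("select p from Product p where p.id = :id")
                           .setParameter("id", 42L)
                           .setHint("toplink.cache-usage", "CheckCacheByExactPrimaryKey");
              Product viaQuery = (Product) q.getSingleResult();

              em2.close();
              emf.close();
          }
      }
    Whether the factory is a singleton or session-scoped, TopLink Essentials proxies the same underlying server session, so either setup ends up consulting the same shared cache for the find() calls above, which matches the behavior Mike describes.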

  • Paying a CO account and credit utilization?

    I can't seem to search correctly to find my answer, so a bit of back story: I am young and stupid and thought things would disappear... an awful excuse, I know, until I went to apply for a student loan with a cosigner and was denied because the original borrower's score (mine) was too low. So began the process. I'm not too horrible, but I let one account spiral to a 4k charged-off debt, lost hope, and closed my eyes. It's been charged off for 5 months and continuously repeating CO. There's no CA on my official Experian report, but I have received calls myself, and once from my husband's aunt, who says some girl called her looking for me. I made arrangements to pay the debt off completely within three weeks (2700 initial payment and 1300 this week). Why I didn't do this sooner (a valid excuse this time): I had a car accident the month the account became delinquent (couldn't work), and only after receiving the settlement from my lawyer a few weeks ago did I finally have the funds.
    SO my question: because the debt is continuously being reported on a closed account, as a CO by the original creditor, with no CA activity shown except a few soft pulls, does this balance of 4k continue to hurt my overall credit utilization? My reasoning is yes, because it is still credit utilized but not credit available (since it is closed). I have maintained ~$0 on another account with a 5k limit. Therefore, with this reasoning, my score is so low because I am using 100% if not 110% of my credit every month? I realize there are other factors etc., but is this part of the problem? Searching Google I've just seen that paying charged-off debt is stupid and you should let it die for 7 years, so now I'm out of almost a third of my settlement... Also, a side note: the phone 'check' retrieved by the CA was made out to CITI BANK for the full funds, not to the CA, if that helps. Thank you in advance, and sorry for the long letter.

    You're actually smart for settling or paying in full. For one, you avoided a possible judgment (from the OC or CA). Two, it will no longer affect your utilization. You will no longer have to worry about calls or CAs haunting you. You did the right thing settling the debt while it's still recent.

  • Best practice for DHCP Server 2008 utilization of IP Addresses

    I am currently using 85% of the addresses on my DHCP server running Windows Server 2008. Does Microsoft recommend a particular percentage (%) of utilization before building another scope? Or what is the industry's best practice or Microsoft's recommendation for building another scope?

    Hi,
    As far as I know, there is no standard for DHCP scope usage. Just make sure that the IP address pool isn't exhausted.
    For the best practices of DHCP, please refer to the article below,
    DHCP Best Practices
    http://technet.microsoft.com/en-us/library/cc780311(v=WS.10).aspx
    Recommended tasks for the DHCP server role
    http://technet.microsoft.com/en-us/library/cc731392.aspx
    Hope this helps.
    Steven Lee
    TechNet Community Support

  • Scope of simulation in BPA suite

    Hi,
    What is the scope of simulation in the BPA suite?
    * If we provide the processing time and/or cost involved in each functional block, then during simulation we can get an analysis based on this.
    * Simulation can identify a few weak points in your business process. But how is a weak point identified? Is any rule available for this?
    Apart from this, what is the significance of simulation in BPA? Can it provide any other useful inference or suggestion that can help improve the process?
    Thanks,
    Vishnupriya

    Hi Vishnupriya,
    "how do we find the critical path using BPA suite." You can perform a cost and/or a time analysis - it depends on your preferences. A disadvantage of the cost analysis is, that the cost attributes can only be defined for functions. It is not possible to specify relation-dependent costs (e.g. different costs if a function is performed by a manager or a clerk). Below I am concentrating on a time analysis.
    a) Identify bottlenecks
    After you performed the simulation you need to analyse the statistics. Each object type has its own simulation attributes - check the help section under "What simulation-relevant object types exist" to get further information. For comparison the following categories/attributes are very useful indicators to identify bottlenecks:
    * Processes (cumul.): Created processes, Completed processes, Process folders in static wait state and Process folders in dynamic wait state
    * Events (cumul.): Activations
    * Functions (cumul.): Process folders in dynamic wait state
    * Processes (det.): Throughput time (min.), Throughput time (max.)
    Hint: Increasing throughput times, high dynamic wait time sums and a high degree of utilization (for human resources) are indicators of bottlenecks.
    b) Identify modeling faults
    E.g.: If you have a gateway in your model and the "Process folders received" is always equal to "Process folders waiting" it means that the gateway cannot be passed. Normally it indicates a deadlock situation and you should check that each split in your model is properly joined. The issue can also be considered at the "Degree of activation", which will be equal to zero. Use the semantic check to ensure that your model-format is correct.
    "how to do this comparison"Compare the relevant attributes (times, costs) of your As-Is and To-Be models. You have several options to perform the comparison:
    a) create charts of both simulation runs
    b) check the simulation statistics of both simulation runs
    c) use the ARIS report functionality or create a user-defined report to extract the simulation results
    Best regards,
    Danilo

  • I get this error message quite frequently: (TypeError: attempt to run compile-and-go script on a cleared scope) what does it mean and how do I stop it?

    I get this error message quite frequently: (TypeError: attempt to run compile-and-go script on a cleared scope). What does it mean and how do I stop it? It doesn't always happen, but it happens often.

    Ever since the 6.0.2 update - every single day there's been something new going horribly wrong. For the past couple of weeks it has been the browser hanging or crashing for no apparent reason, and now this error hitting the browser screen over and over, a hundred times a minute. I've used Firefox since it began and NEVER had any issues previously. This 6.0 upgrade is a downgrade and is making me seriously look at Chrome (never IE - that's why I switched to Firefox), though making the switch will be a pain due to all the apps I rely on and thousands of bookmarks that would need to be transported. Please, Mozilla, fix this, because using this browser has suddenly gone from seamless to frustrating pretty much overnight!

  • High time consumption and CPU utilization on EPM 11.1.2.1

    We are using a distributed environment for EPM system 11.1.2.0, in which, on system A, Foundation services and Planning have been installed and on system B, Essbase, Admin Services and Provider Services have been installed.
    The configuration for the two systems is as mentioned below :
    System A:
    Intel Xeon CPU X7560 2.27 GHz (dual core)
    12 GB RAM
    System B:
    Intel Xeon CPU X7560 2.27 GHz (dual core)
    8 GB RAM
    A business rule takes 15 minutes when executed on version 9.3.1, whereas the time consumed on system 11.1.2.1 varies from 3 hours (mostly) down to 10 minutes.
    This business rule aggregates 5 dimensions (1 dense and 4 sparse). It does not create any new blocks and intelligent calc is also set to off.
    Although the cache and memory values on the newer system are higher than the previous version, we fail to reduce the time consumption and CPU utilization. Please help us resolve this issue.

    Your issue is beyond the scope of a forum. I suggest you find an infrastructure person who has Essbase experience. Your machines also have fewer resources than the standard deployment guide recommends.
    This would appear to be either a lack of physical memory or an issue with the I/O speed of the disk subsystem on which you are storing your Essbase data.
    Regards,
    John A. Booth
    http://www.metavero.com

  • Wow. iTunes wifi sync on Windows XP is horrible

    I've given up using wifi sync.
    Many times a day, iTunes pops up a message saying it cannot sync to some phone or other. But, it never says which phone or what the problem is.
    At least once a week, the apsdaemon goes crazy and drives my CPU to 100% utilization. The only way to fix that is to reboot.
    I hope Apple is reading.

    Well, let's see. The fact that I'd have to update my video card. The week or so it would take to install Windows 7, reinstall all of my apps, and get all of the settings correct. Trading the relatively few problems I have now with XP for a whole new set of problems. The risk that some program, driver or piece of hardware I need won't be compatible with Windows 7.
    Trust me, I'm not alone. And I wouldn't be surprised if iTunes wifi sync was just as horrible on Windows 7.

  • Equipment / Resource Utilization:

    Hi Experts,
    This report should provide a comparison of Standard vs. Actual machine-hours utilization for a particular resource, or for all the resources utilized in a given period, and its variance. If the variance percentage is more than 10%, it should be highlighted in red colour, as it is an area having potential/scope for improvement. Could you please tell me how to build this report?
    Thanks to all  in advance.
    Thanks & regards,
    Rajesh

    Hi Rajesh,
    you have to create an exception for this in the Query Designer:
    http://help.sap.com/saphelp_nw04s/helpdata/en/68/253239bd1fa74ee10000000a114084/frameset.htm
    regards,
    jai

  • Absolutely HORRIBLE Rendering of HD Footage to Web Gallery

    Here's a question for you all. How is it that I can have High Definition footage (Sony HDR-HC1 1080i) imported into iMovie and after editing I export it to Web Gallery and it looks absolutely atrocious? Scope it out yourselves and tell me what you think it could be. It most certainly IS NOT better than DVD quality!!!!
    http://gallery.mac.com/brian_green#100055
    I don't know what to do with iMovie anymore. I can't get it to render correctly in any situation, yet when I actually play the footage through the preview pane it looks great (so I know it wasn't an import issue). I certainly hope Apple gets an update to iMovie out soon so I can actually tolerate the results of its horrible rendering. I like the new layout of iMovie, but not at the expense of the quality of my High Definition footage.
    Please help with this if you happen to know some good tricks.
    Brian Green

    I've downloaded both movies..
    both have the same 'dimensions'/res 960x540, same codec, audio settings, framerate ...
    but:
    fireworks has 320KBit/sec
    woods has 3987KBit/sec ...
    that is a factor of >10 ...
    (as an aside, an excellent example that bitrate is the main factor in video quality, not fps or resolution...)
    Was the import from the Sony into a HiDef project? iM08 allows the 'setting' of projects to 'Full Quality' for HiDef...
    Are you sure you haven't accidentally chosen 'Mail' as the export preset?
    Finally, voodoo!! ... did iM08 export your preview clip, not your project? Because, downsized to the size of a stamp, the fireworks would look quite terrific.

  • Capacity Utilization in LTP

    Dear Guys,
    How can we handle Capacity Utilization in Long-Term Planning when the capacities for Short-Term/Mid-Term Planning are still not defined properly?
    Would you please clarify it for me. Thank you in advance!

    Hi Payam
      LTP is carried out to assess the long-term capacity requirement against the projected level of utilization and future plans - for example, whether an expansion of capacity is needed or not, since capital investments are needed for capacity expansion. Short- and mid-term planning can be carried out to assess utilization and whether an expansion of shifts (extended working hours) is possible without making any capital expenditure.
      Short- and mid-term planning are not mandatory for LTP. Any planning can be carried out at any time, but the scope and purpose may be different.
    Hope this helps. Please reward points.
    SK

  • Oracle Coherence increasing Swap Utilization

    We are using Oracle Coherence on linux servers. However, we noticed that because of Coherence processes running, often our swap utilization % increases too much, sometimes becoming more than 98%, even touched 100% a few times.
    Once we kill all the coherence related processes, then it becomes normal.
    Is there any way we can make the Coherence processes use only a particular amount of swap space?
    Currently increasing swap space is not in our scope.
    Please suggest.

    Hi,
    We are using Oracle Coherence on linux servers. However, we noticed that because of Coherence processes running, often our swap utilization % increases too much, sometimes becoming more than 98%, even touched 100% a few times.
    Swapping itself (1%-100%) is not a good sign and should be avoided by ensuring that you have sufficient memory, such that you are not making active use of swap space on your machines. Active usage of swap space will cause significant performance degradation.
    Is there any way we can make the Coherence processes use only a particular amount of swap space?
    Manage your memory by allocating heap using -Xmx for the Coherence JVMs. You need to ensure that sufficient RAM is available on the server for the Coherence JVMs and other operating system processes, and that they do not consume all of the RAM.
    To temporarily set the swappiness, as the root user echo a value into /proc/sys/vm/swappiness. The following command will set swappiness to 0:
    echo 0 > /proc/sys/vm/swappiness
    To set the value permanently, modify the /etc/sysctl.conf file.
    Hope this helps!
    Cheers,
    NJ

  • Throttle CPU utilization during software updates

    Are there any settings to throttle CPU utilization during software update installation on client computers? I am deploying updates to test systems currently and noticed that the CPU utilization was being pegged at 100% during the install. After the updates installed, the system returned to normal.

    Also note that ultimately it is the Windows Update Agent that is installing the update(s), so it's not really within ConfigMgr's scope of control and would happen no matter what method you chose to use to update these systems with Windows Updates - which leads me to believe these systems have perf issues to begin with. Are they VMs?
    Jason | http://blog.configmgrftw.com

  • Difference between JSP Scopes

    Hey people,
    My name is Lucas Abrao, 22, Brazil. I'm a Java programmer and now I'm starting to learn JSP. I've been having some basic doubts because the books I have bought are a little bit advanced for that. I want to know if you could tell me the differences between the JSP scopes. I mean, I want to know the difference between application, session, request and page scopes, and when to utilize each one of them. Could you give four simple code examples, one with each of them at work?
    Thank you all.

    Page scope is the smallest scope, and is a representation of the PageContext object for your JSP. Each JSP has its own page scope, so objects in the page scope of one JSP are not accessible to other JSPs. This is sorta like making a private variable in Java.
    Request scope is the next smallest scope, and is represented by the JSP's request object. All JSPs and servlets that share a request share the request scope. For example, if I have a JSP that forwards to another page, and that second page includes a third JSP page, then all three pages are in the same request and can share objects through the request scope. A special note here: response.sendRedirect() will create a new request, unlike forwards and includes. Also note, a new request is made every time the user gets a new page, be it by clicking a link, a button, or some JavaScript call.
    Session scope is the next lowest scope, represented by an HttpSession object. All requests from the same user are in the same session (unless you or they make it otherwise). Each user has his own session. If you want data to be referred to through multiple pages, after each page is displayed and the user requests a new page, then store the information in the session. Note, in order for sessions to work, the user must have cookies on, or you must re-write the URLs. Take a look at maintaining sessions for more help on that.
    The widest scope is the application scope, represented by the ServletContext object. All users share the same values in the application scope, as there is only one made for the web application. If you have some static material that all users should be able to access, then put it in the application scope, but be careful. Each user will see the changes other users make, and certain threading issues can occur if not handled properly. So application scope is usually best used for read-only data.
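    Since the original question asked for code, here is a minimal sketch of all four scopes in action. The page names (welcome.jsp, next.jsp) and the attribute names are made up for illustration; only the implicit objects (pageContext, request, session, application) come from the JSP spec:
      <%-- welcome.jsp: stores one value in each of the four JSP scopes --%>
      <%
          // Page scope: visible only inside this JSP
          pageContext.setAttribute("pageGreeting", "hello from page scope");

          // Request scope: shared with pages this request forwards to or includes
          request.setAttribute("requestGreeting", "hello from request scope");

          // Session scope: shared by all requests from the same user's session
          session.setAttribute("sessionGreeting", "hello from session scope");

          // Application scope: shared by every user of this web application
          application.setAttribute("appGreeting", "hello from application scope");
      %>
      <jsp:forward page="next.jsp" />

      <%-- next.jsp: reads the attributes back after the forward --%>
      Request: <%= request.getAttribute("requestGreeting") %><br/>
      Session: <%= session.getAttribute("sessionGreeting") %><br/>
      Application: <%= application.getAttribute("appGreeting") %><br/>
      Page: <%= pageContext.getAttribute("pageGreeting") %> <%-- null: page scope did not survive the forward --%>
    After the forward, next.jsp can still read the request, session, and application attributes, but the page-scope attribute comes back null, which is exactly the difference between the scopes described above.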
