Performa 500

Well, let me tell you a shortened version of my story. I bought a Performa 500. It worked for a while. I was playing Number Munchers (the best game ever) when it crashed, and it hasn't been right since. I don't have any disks that would have come with it originally. The Performa turns on and then shows the icon with the floppy disk and a question mark. I bought a restore/utility disk for it off eBay, and that lets me boot the computer, but only into the utility part. I did the restore hard drive part, but nothing is working. If anyone can help, it would be much appreciated. Thanks.

technicolor76,
Where are you in the world? If you are in the northwest, we can get you some local resources.
I did not recognize the 500 designation so I checked the specs.
http://www.info.apple.com/support/applespec.legacy/performa.html
I assume that it is an all in one style. If you want to check my profile and email me, I can offer more detailed help.
I hope you did not pay too much. You did someone a favor to keep that machine out of a landfill. I am not at a machine with my bookmarks so hopefully someone will jump in with a game download site.
Jim

Similar Messages

  • Connect iMac to Performa 500

    Am I able to connect a legacy Performa 500 to my Intel iMac to transfer files to an external HD? The Performa has an Ethernet connection.

    How easy this can be done depends a bit upon the operating system on the Performa.
    Typically, with System 7.5.3 or higher, Open Transport networking is possible, meaning that the TCP/IP control panel is active (the earlier classic networking uses MacTCP, which may make things more complicated).
    Because of the differences between the latest versions of Mac OS X and the old systems, do not expect any "built-in" file sharing to work.
    Instead, a good idea can be to use a small FTP server (NetPresenz is now free) on the Performa. One can then connect to this server from a dedicated FTP client (or a web browser) on a modern Mac (or PC). The physical connection would be over Ethernet cables (via the LAN ports of a normal router).
    If you need to install downloaded program files, one solution (provided that the Performa has an internal or external CD-ROM drive) may be to use a CD-R (in an ISO format, burned at a low speed) for transfers from a modern Mac or PC. Keep any downloaded files encoded (typically, MacBinary = .bin) until they are on the Performa. Use StuffIt Expander on the old computer to decode the files.
    Jan

  • Real Life Mac Mini Performance vs iMac

    Hi Guys, 
    I'm about to purchase my first Mac Mini, and I have a question about the Mac Mini's performance.  How does the base Mac Mini compare against the base iMac 21.5 (i5 2400S)? I understand that the iMac uses the i5 2400S, a desktop CPU, and its benchmark numbers are obviously miles ahead of the base Mac Mini.
    My primary use for the Mac would be day-to-day computing with some Adobe web/creative tools like Photoshop, InDesign, and Dreamweaver for checking web/graphic designers' work.  I plan to study iPhone app development for fun, so some Xcode.
    I already have a good 24-inch monitor, Bluetooth keyboard, and Magic Mouse from my last MacBook Pro setup.  The price difference between the iMac and the Mac Mini is about $500.
    In real-life usage, with the memory upgraded to 8 GB, how much of a difference would I notice?  Will the base iMac have significantly better performance (worth the $500 difference)?
    Another question is, if I add an SSD, will it void the warranty?  I understand that swapping the HDD is okay, but I read somewhere that adding another HDD would actually void the warranty....
    Thank you!

    I am an electrical engineer and do a lot of embedded firmware design
    and supporting software, mostly in Win8.1 virtual machine with Parallels.
    I also do a lot with Photoshop CS6 and some video work.
    When confronted with an upgrade decision, I ended up going with the 27" iMac
    and was glad I did.  I expanded it to the full 32GB and am using an SSD via Thunderbolt.
    I already had a Thunderbolt display, so I ended up with a very good workstation setup.
    With that 32GB of RAM, I am able to have OS X, a Win8.1 coding/debug virtual machine,
    and a Win7 software test platform all running at the same time with no noticeable
    performance hit.
    Also, if developing large firmware/software, SSDs will speed the work up dramatically,
    as these types of projects are quite file intensive.  On the last project I was working on,
    I could build, download, and start debugging an embedded app using Win8.1 on a VM
    before my coworkers were halfway through building the app on their native Win laptops/desktops,
    which had standard hard drives.
    With that said, I still have 2 Minis in service.  My 2011 Mini Server (my former workstation) was
    repurposed to be my HTPC and my 2010 Mini has been re-assigned as my home iTunes/file server.

  • SQL stored procedure Staging.GroomDwStagingData stuck in infinite loop, consuming excessive CPU

    Hello
    I'm hoping that someone here might be able to help or point me in the right direction. Apologies for the long post.
    Just to set the scene, I am a SQL Server DBA and have very limited experience with System Center, so please go easy on me.
    At the company where I am currently working, they are complaining about very poor performance when running reports (any of them).
    A quick look at the database server showed CPU utilisation at a constant 90-95%, so you don't have to be Sherlock Holmes to realise there is a problem. The instance consuming the majority of the CPU is the instance hosting the data warehouse, and in particular
    a stored procedure in the DWStagingAndConfig database called Staging.GroomDwStagingData.
    This stored procedure executes continually for 2 hours, performing 500,000,000 reads per execution before "timing out". It is then executed again for another 2 hours, etc. etc.
    After a bit of diagnosis, it seems that the issue is either a bug or that there is something wrong with our data, in that a stored procedure is stuck in an infinite loop.
    System Center 2012 SP1 CU2 (5.0.7804.1300)
    Diagnosis details
    SQL connection details
    program name = SC DAL--GroomingWriteModule
    set quoted_identifier on
    set arithabort off
    set numeric_roundabort off
    set ansi_warnings on
    set ansi_padding on
    set ansi_nulls on
    set concat_null_yields_null on
    set cursor_close_on_commit off
    set implicit_transactions off
    set language us_english
    set dateformat mdy
    set datefirst 7
    set transaction isolation level read committed
    Stored procedures executed
    1. dbo.p_GetDwStagingGroomingConfig (executes immediately)
    2. Staging.GroomDwStagingData (this is the procedure that executes for 2 hours before being cancelled)
    The 1st stored procedure seems to return a table with the "xml" / required parameters to execute Staging.GroomDwStagingData
    Sample xml below (cut right down)
    <Config>
    <Target>
    <ModuleName>TransformActivityDim</ModuleName>
    <WarehouseEntityName>ActivityDim</WarehouseEntityName>
    <RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName>
    <Watermark>2015-01-30T08:59:14.397</Watermark>
    </Target>
    <Target>
    <ModuleName>TransformActivityDim</ModuleName>
    <WarehouseEntityName>ActivityDim</WarehouseEntityName>
    <RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName>
    <ManagedTypeViewName>MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity</ManagedTypeViewName>
    <Watermark>2015-01-30T08:59:14.397</Watermark>
    </Target>
    </Config>
    If you look carefully you will see that the first <Target> is missing the ManagedTypeViewName, which, when "shredded" by Staging.GroomDwStagingData, returns the following result set
    Example
    DECLARE @Config xml
    DECLARE @GroomingCriteria NVARCHAR(MAX)
    SET @GroomingCriteria = '<Config><Target><ModuleName>TransformActivityDim</ModuleName><WarehouseEntityName>ActivityDim</WarehouseEntityName><RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName><Watermark>2015-01-30T08:59:14.397</Watermark></Target><Target><ModuleName>TransformActivityDim</ModuleName><WarehouseEntityName>ActivityDim</WarehouseEntityName><RequiredWarehouseEntityName>MTV_System$WorkItem$Activity</RequiredWarehouseEntityName><ManagedTypeViewName>MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity</ManagedTypeViewName><Watermark>2015-01-30T08:59:14.397</Watermark></Target></Config>'
    SET @Config = CONVERT(xml, @GroomingCriteria)
    SELECT
    ModuleName = p.value(N'child::ModuleName[1]', N'nvarchar(255)')
    ,WarehouseEntityName = p.value(N'child::WarehouseEntityName[1]', N'nvarchar(255)')
    ,RequiredWarehouseEntityName =p.value(N'child::RequiredWarehouseEntityName[1]', N'nvarchar(255)')
    ,ManagedTypeViewName = p.value(N'child::ManagedTypeViewName[1]', N'nvarchar(255)')
    ,Watermark = p.value(N'child::Watermark[1]', N'datetime')
    FROM @Config.nodes(N'/Config/*') Elem(p)
    /* RESULTS - NOTE THE NULL VALUE FOR ManagedTypeViewName
    ModuleName            WarehouseEntityName  RequiredWarehouseEntityName   ManagedTypeViewName                                                 Watermark
    TransformActivityDim  ActivityDim          MTV_System$WorkItem$Activity  NULL                                                                2015-01-30 08:59:14.397
    TransformActivityDim  ActivityDim          MTV_System$WorkItem$Activity  MTV_Microsoft$SystemCenter$Orchestrator$RunbookAutomationActivity  2015-01-30 08:59:14.397
    */
    When the procedure enters the loop to build its dynamic SQL to delete the relevant rows from the inbound schema tables, it concatenates various options / variables into an executable string. However, when adding a NULL value to a string, the entire string becomes
    NULL, which then gets executed.
    Whilst executing "EXEC(NULL)" would cause SQL to throw an error and be caught, executing the following doesn't:
    DECLARE @null_string VARCHAR(100)
    SET @null_string = 'hello world ' + NULL
    EXEC(@null_string)
    SELECT @null_string
    So, as it hasn't caused an error, the next part of the procedure is to move to the next record, and this is why it's caught in an infinite loop
    DELETE @items WHERE ManagedTypeViewName = @View
    The value for the variable @View is the ManagedTypeViewName, which is NULL. As ANSI_NULLS is set to ON in the connection and not overridden in the procedure, the above statement won't delete anything, because it needs to handle NULL values differently (IS NULL),
    so we are now stuck in an infinite loop executing NULL for 2 hours until cancelled.
    I amended the stored procedure and added the following line before the loop statement, which had the desired effect and "fixed" the performance issue for the time being
    DELETE @items WHERE ManagedTypeViewName IS NULL
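    To make the loop behaviour easy to reproduce outside the procedure, here is a small standalone sketch (the table variable and the sample values are my own, not the vendor's code): under ANSI_NULLS ON the "= @View" comparison never matches the NULL row, while the IS NULL predicate removes it.
    DECLARE @items TABLE (ManagedTypeViewName NVARCHAR(255));
    INSERT INTO @items (ManagedTypeViewName) VALUES (NULL), (N'MTV_Example_View');
    DECLARE @View NVARCHAR(255);
    SET @View = NULL;
    DELETE @items WHERE ManagedTypeViewName = @View;   -- deletes 0 rows: "= NULL" evaluates to UNKNOWN
    SELECT RowsAfterEquals = COUNT(*) FROM @items;     -- still 2
    DELETE @items WHERE ManagedTypeViewName IS NULL;   -- removes the orphaned NULL row
    SELECT RowsAfterIsNull = COUNT(*) FROM @items;     -- now 1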
    I also noticed that the following line in dbo.p_GetDwStagingGroomingConfig is commented out (no idea why, as there are no notes in the procedure):
    --AND COALESCE(i.ManagedTypeViewName, j.RelationshipTypeViewName) IS NOT NULL
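    For illustration only (this is just my own sketch reusing the @Config variable declared in the example above, not the vendor's procedure), applying that kind of NOT NULL filter while shredding the config would drop the incomplete <Target> before it ever reaches the grooming loop:
    SELECT
    ModuleName = p.value(N'child::ModuleName[1]', N'nvarchar(255)')
    ,ManagedTypeViewName = p.value(N'child::ManagedTypeViewName[1]', N'nvarchar(255)')
    FROM @Config.nodes(N'/Config/*') Elem(p)
    WHERE p.value(N'child::ManagedTypeViewName[1]', N'nvarchar(255)') IS NOT NULL
    -- returns only the second <Target>, the one that has a ManagedTypeViewName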
    There are obviously other ways to mitigate the dynamic SQL string being NULL (there's more than one way to skin a cat, and that's not why I am asking this question), but what I am concerned about is whether there is a reason that the xml / @GroomingCriteria is incomplete
    and / or that the procedures don't handle potential NULL values.
    I can't find any documentation, KBs, or forum posts of anyone else having this issue, which somewhat surprises me.
    I would be grateful for any help / advice that anyone can provide, or if someone can look at these 2 stored procedures on a later version to see if it has already been fixed. Or is it simply that we have orphaned data? This is the bit that concerns me most, as I don't
    really want to be deleting / updating data when I have no idea what the knock-on effect might be.
    Many many thanks
    Andy

    The first thing I would do is upgrade to 2012 R2 UR5. If you are running non-US dates, you need the UR5 hotfix as well.
    Rob Ford scsmnz.net
    Cireson www.cireson.com
    For a free SCSM 2012 Notify Analyst app click here

  • Signatures at multiple positions in the PDF

    Signatures at multiple positions in the PDF. This is mainly a Verification & Validation document,
    where signatures will be applied for each test case present in the document (4 to 5 signatures on a single page).
    We have observed that applying a signature in one place involves lots of steps and saving the file each time.
    This leads to a lot of effort when we have to perform 500+ signatures in a single document.
    So, is there any mechanism to apply a pre-stored signature with a single button?
    Note:
    We are using Acrobat 11 (Adobe Reader 11).
    The steps that we are using for getting a signature are:
    Precondition: (steps involved in creating the signature)
    Steps involved in placing the signature (performed each time a signature is placed):
    Click on Place the Signature button
    Identify the location for the signature
    Enter the password
    Click on OK
    Save the file
    Thanks
    Deepak

    From the steps that you described it looks to me like you are using digital signatures. As TSN already noted, digital signatures apply to the whole document, not to individual parts of it (like specific pages). The signature appearance exists just as a visual aid to make it obvious to the reader of the document that it is signed. It does not matter on which page a signature appearance is placed.
    You can automate the process of signing the document with multiple signatures with JavaScript if you know ahead of time which signing credentials you want to use for each signature, where to place it, and with which name to save. To do that you need to be familiar with JavaScript programming and the Acrobat JavaScript API (it is part of the Acrobat SDK, which you can download from adobe.com). You can also Google "Acrobat JavaScript API" for more info on specifics.

  • Lightweight laptop for Adobe Creative Suite 5 advice

    Hello,
    I originally posted this in the 'Downloading, Installing, Setting Up > Discussions' forum and have been sent here to see if anyone may be able to help; I need some advice and guidance on buying a new laptop that will run Creative Suite 5 Design Premium. I have a desktop which I use for the majority of my design work, but I need something portable for tweaking and presenting whilst on the move.
    The key thing is I need it to be lightweight, as I have joint problems, so preferably under 1.9kg (I know that usually the smaller, the less powerful), and with as good a battery life as possible. I mainly use Photoshop, Illustrator and some InDesign, working at real size (usually under and up to A3, sometimes larger).
    I've found two possibilities; can anyone tell me whether either of these will run comfortably for what I need?
    HP Pavilion dm4-2101ea £449
              Featuring Genuine Windows® 7 Home Premium (64-bit)
              2nd generation Intel® Core™ i3-2330M processor
              4GB memory for top system performance
              500 GB Hard Drive for storing all your digital files
              35.5 cm (14.0") High-Definition LED-backlit BrightView Display
              Intel® HD Graphics
              Finished in Dark Umber, Brushed Aluminum
              1 year, pick-up and return, warranty. Optional upgrade to 3 years accidental damage cover for peace of mind
              Featuring Wi-Fi, Bluetooth, Integrated Webcam & Microphone
              Includes HP Entry Carrying Case & HP Wireless Mobile Mouse
    Sony VAIO VPCSB4C5E (VAIO SB 13 series) £614
               Configured
                Intel®Core(TM)i5-2450M,2.50GHz
                Genuine Windows® 7 Home Premium
                320 GB Serial ATA (5400 rpm)
                DVD disc drive
                8 GB 1333 MHz DDR3-SDRAM
                33.7 cm LCD, 1366x768+webcam
                No Wireless WAN
                No Long-Life Battery
                No security features
                HD digital camera
                Office 2010 Starter + Acrobat
                No protection
                McAfee Online Backup trial
                No Imagination Studio Suite
            Also Included
                Backlight keyboard
                AMD Radeon(TM) HD 6470M 512MB
                English (QWERTY)
                1 AC Adapter
                USB 2.0 x2 + USB 3.0 x1
                Wireless LAN (802.11abgn)
                HDMI(TM) output
                Bluetooth® 3.0
    If the HP one will run the suite OK then I'll go for that one, as it's a little kinder on the budget. I've compared them myself, but I'm not sure if the HP would be sufficient or if the Sony would be wasting money when something cheaper will do the job. Though at the moment Sony are doing a deal for 8 GB of RAM for the price of 4 GB, but only until Monday...
    Any advice would be really appreciated, thank you!

    Go to HP's web site and you can customize the laptop to your needs and keep an eye on the cost as you add/subtract features. The thing to keep in mind is: forget about tech support; plan on paying someone local to fix it, fix it yourself, or buy new, as tech support is no good in my eyes.
    For me the price outweighs the tech support, otherwise I would pick someone else.
    If you need good tech support, then, well, there's not much out there. Maybe Dell, but I have seen laptops that break under normal or almost normal wear and tear. Their laptops must be sent in, but I guess tech support is OK. I've never owned a Dell laptop, just seen friends with them.
    I have heard rumors of bad tech support from Sony, but that's just rumors, so I can't say how good or bad they really are.
    As for a lighter laptop, you could go with a solid state drive; they don't hold as much, but they are faster and much lighter than their counterparts.
    Get as much RAM as you can afford; you will be glad you did when the time comes. I have already run out of RAM editing a video with 16 GB, so you never have enough.
    Anyway, good luck on your purchase. I am sure you will enjoy it, whichever decision you make.

  • Does moving SP code from DB to App Server help scale an application?

    We have an application developed using Oracle 10g Forms and a 10g DB. All our processing is done using SPs, so they all run in the DB server. Even our inserts/updates/deletes to a table are handled by SPs.
    The site with the maximum simultaneous users (i.e. concurrent users) is one with 100 concurrent users.
    We have a prospective customer whose requirement is 300 concurrent users. Our application won't be able to handle it, since the DB server is a single-processor server with limited memory.
    One suggestion was to move the SPs to the App Server by moving them into the Form. Since OAS has a PL/SQL engine, they will run in the App Server and hence remove the workload from the DB.
    I don't buy this. My point is, even if the SPs are moved to the app server, the SQL will still run in the DB server, right?
    So what is the advantage?

    Christian, I just modified the original post thinking nobody would reply since it's very long. Thanks a lot for the reply. For others, and for my own reference, here is my original question.
    I have a problem like this: take this scenario. We have a TELCO app. It is an e-business web application (i.e. a dynamic web site) developed using ASP.Net/C#. The app server is IAS and the DB is Oracle 10g. IAS and the DB reside on 2 servers. Both are single-processor servers.
    The maximum simultaneous user load is 500. i.e. 500 users can be working in the system at one time.
    Now suppose 500 users login at the same time and perform 500 different operations (i.e. querying, inserts, updates, deletes). Now all 500 operations will go to the App Server. From there the C# code will perform everything using Oracle stored procedures (SP). I.e. we first make a connection to the DB, SP is invoked by passing parameters, it will perform the operation in the DB, send the output to the App. Server C# code and we will close the Oracle connection (in App Server. C# code).
    Now, the 500 operations will obviously have to wait in a queue and the SQLs will be processed in the DB server.
    Now, question is how does CONNECTION POOLING help in this situation?
    I have been told that the above method of using DB SPs to perform processing will make the whole system very slow, since all the load of the processing has to be borne by the DB server, and since DB operations involve disk I/O it will be very slow. They say you cannot SCALE the application with this DB processing mode and you have to move to an App Server processing mode in order to scale your application. I.e. if the number of users increases to 1000, our application won't be able to handle it and will get very slow.
    What they suggest is to move all the processing to the App. Server (i.e. App. Svr. Memory). They also say that CONNECTION POOLING can help even further to improve the performance.
    I have some issues with this. First of all to get all the data to the App server memory for each user process from the DB will not only require disk I/O, it will also involve a network trip. That will obviously take time. Again the DB requests to fetch the data will have to wait in the DB queue. Then we do the processing in the App. Server memory and then we have to again write to the DB server which again will require a network trip and disk I/O. According to my thinking won’t this take MORE TIME than doing it in the DB server??
    Also, how can CONNECTION POOLING help? In C# we OPEN a connection ONLY just before running the SP or getting the data, and we close the connection just after the operation is over.
    I don't see how CONNECTION POOLING can improve anything.
    I also don't see how moving data into the App Server from the DB Server can improve the performance. I think that will only decrease performance.
    Am I right or have I missed something?

  • How-to create auto-updating fields

    Hello, I'm having a slight problem with user controlled fields on the front panel. I'm using controls in LabVIEW 8.0, and my problem is that when a user enters a new value into a numeric control field, the control does not update and pass that value throughout the rest of the code unless the user presses the <enter> key while the field is still selected. Seems as though there should be an easy fix.
    Thanks in advance.

    Well, to be specific, what I'm trying to do is create a program for students to use in an education lab; just for simplicity, it would be preferred that they not have to press Enter at the end of every string of numbers written into a field. The controls that I'm using govern for-loop parameters; the for loop activates with a mouse-down event structure. The problem is that if a student enters that they'd like the loop to perform, say, 500 iterations one time, and they wish for 300 the next, then if they don't press Enter after changing the value of the numeric control on the front panel, the loop will perform 500 iterations again. I have tried using various types of event structures on the numeric control, such as "key up" and "key down", but the same problem arises: the Enter key must be pressed in order for the value to update.
    I've tried this both on the experiment VI, and on a new blank VI with various control-to-indicator set-ups.

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade, we face performance issues with one of our Planning jobs (e.g. Job E). It takes 3x the time to complete in our new environment (11.1.2.3) when compared to the old one (11.1.1.3). This job loads the actual data and does the aggregation. The pattern which we noticed is: if we run a restructure on the application and execute this job immediately, it gets completed in the same time as 11.1.1.3. However, in current production (11.1.1.3) the job runs in the following sequence Job A -> Job B -> Job C -> Job D -> Job E and it completes on time, but if we do the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one's.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Monitoring Performance - HTTP 500

    Hi all,
    I am load testing a system and have been getting more than a 2:1 ratio of Error 500s:
    Error 500--Internal Server Error
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.5.1 500 Internal Server Error
    The server encountered an unexpected condition which prevented it from
    fulfilling the request.
    I am watching my default execute queue and it appears fine (a large percentage of idle threads). My default execute queue has 50 threads, with 60% dedicated to socket connections (I have been playing around with this number). My test environment has 40 simultaneous threads making HTTP requests with a 1-second delay between requests.
    Other than viewing the results of my test, how can I detect when WLS is returning an error and not processing a request (through JMX)?
    The monitoring behavior I expected when I saw this error was a huge utilization of threads in the execute queue - equal to 60% * 50 threads = 30 - but I consistently have 35-40 threads idle. Any ideas?
    Furthermore, I don't see any other database errors being generated in my log files (and I have my servlet entry point coded to throw a ServletException when that occurs, so I should see that).
    One last note: in my tests I have a request that does not invoke any Servlets or JSPs (strictly HTTP), and it is not generating this error.
    Thanks in advance for your help!
    Steve

    Look in the weblogic.log file for a stack trace. That should tell you exactly
    what is going on.
    Mike

  • Query performance is slow for only 500 objects- CMS timeout

    When a report is initially scheduled, I only have the job id as a key to determine the status of what happens to the report.
    Yes, the job id is not indexed - but the others are. What other fields can I use? I am only retrieving 500 records.
    SELECT
         SI_ID, SI_NAME ,....
    FROM
         CI_INFOOBJECTS
    WHERE
         SI_INSTANCE=1
         AND SI_RECURRING=0
         AND SI_OWNER='CustomApp'
         AND SI_UPDATE_TS > '2010-01-01'
         AND SI_NEW_JOB_ID IN (1,2,3.........500)
    and it times out after 9 minutes :
    com.crystaldecisions.sdk.exception.SDKServerException: CMS operation timed out after 9 minutes.
    cause:com.crystaldecisions.enterprise.ocaframework.idl.OCA.oca_abuse: IDL:img.seagatesoftware.com/OCA/oca_abuse:3.2
    detail:CMS operation timed out after 9 minutes.
    I can increase the timeout - but that would not be the solution.
    TIA,
    JM

    SI_NEW_JOB_ID isn't something you filter on.  That field isn't even guaranteed to persist beyond the lifetime of the InfoObject object instance that you've scheduled.
    How SI_NEW_JOB_ID works is this: you schedule an InfoObject object instance. That creates a new InfoObject object instance, with a new SI_ID. So that you'd be able to determine this SI_ID, the value is placed in the SI_NEW_JOB_ID property of the original InfoObject. So you'd schedule the InfoObject object instance, then immediately read back the SI_NEW_JOB_ID from that instance, to find out the SI_ID of the scheduled InfoObject instance.
    SI_OWNER does a double lookup.  It retrieves the SI_OWNERID for the instance, then looks up the name for that User.
    So filter on SI_ID to identify any scheduled instances, and filter on SI_OWNERID to filter on owners.
    SI_ID and SI_OWNERID are indexed properties.
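    For example, a rough sketch of the reworked query (the owner id and the SI_ID list are placeholders - you would substitute the SI_ID values you captured from SI_NEW_JOB_ID at schedule time):
    SELECT
         SI_ID, SI_NAME
    FROM
         CI_INFOOBJECTS
    WHERE
         SI_INSTANCE=1
         AND SI_RECURRING=0
         AND SI_OWNERID=12345
         AND SI_ID IN (101, 102, 103)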
    Sincerely,
    Ted Ueda

  • VSM module: A9K-VSM-500 CGN / IPv6 Performance

    Hi,
    I was wondering if there is more information regarding the VSM performance numbers similar to what is listed on the ISM datasheet for CGN and IPv6 migration features.
    http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/data_sheet_c78-663164.html
    ISM:
    ● Maximum 20 million simultaneous sessions for CGN and DS-Lite, and 15 million simultaneous sessions for NAT64.
    ● Maximum 1 million sessions set up per second
    VSM:
    Appreciate your response.
    thanks
    prashanth

    Thanks for the update!   Do you think the CGN numbers will be posted soon?
    regards
    -P

  • I have a 500 GB hard drive and a 1TB Time Capsule running on a MacBook Pro.  It was all working well until the MacBook went in for a repair a week or so ago.  Since then, TC will not perform a backup;  instead, it says the backup is too large for the disk

    Since having my MacBook Pro repaired (for a video problem), Time Capsule returns the following message:  "This backup is too large for the backup disk. The backup requires 428.08 GB but only 192.14 GB are available."
    I notice that there is also a new sparse bundle.
    Since TC has my ONLY backup (going back about 4 years), I am reluctant to wipe it and start over fresh, as I am afraid of losing files.
    Is there a way of dealing with this?
    I am using Snow Leopard 10.6.8.

    The repair shop likely replaced a major circuit board on your MacBook Pro, so Time Machine thinks that you have a "new" computer and it wants to make a new complete backup of your Mac.
    You are going to have to make a decision to either add another new Time Capsule....or USB drive to your existing Time Capsule....and in effect start over with a new backup of your Mac and then move forward again.
    For "most" users, I think this is probably the best plan because you preserve all your old backups in case you need them at some point, and you start over again with a new Time Capsule so you have plenty of room for years of new backups.
    Or, as you have mentioned, you have the option of erasing the Time Capsule drive and starting all over again. The upside is that you start over and have plenty of room for new backups. The downside is that you lose years of backups.
    Another option....trying to manually delete old backups individually....is tricky business....and very time consuming. To get an idea of what is involved here, study this FAQ by Pondini, our resident Time Capsule and Time Machine expert on the Community Support area. In particular, study the pink box.
    http://web.me.com/pondini/Time_Machine/12.html
    Once you look through this, I think you may agree that this type of surgery is not for the faint of heart.  I would suggest that you consider this only if one of the other options just cannot work for you.

  • Pavilion 500-319na: can I install a second hard drive?

    Operating System: Windows 8.1
    I bought an HP Pavilion 500-319na desktop in the summer and want to install a 2nd hard drive. There are spare connections on the board and the power cable to the internal hard drive has a spare connector. From what I read it needs to be SATA 3. I have read conflicting advice on this board: some say yes, some say no.
    OK, there seems to be only one bay (a poor oversight by HP), but provided I find a way of securing another drive, is there any reason why it should not work?
    From what I read it should be a maximum of 2 TB.
    I want a definitive answer please!
    Win 8.1 64-bit.
    This is the hard drive I plan to buy.  The spec says XP, Vista, Win 7, but am I right in thinking it should work with Win 8.1?
    http://www.pcworld.co.uk/gbuk/components-upgrades/internal-hard-drives/3-5-inch-hard-drives/toshiba-...

    As I said to you in the other cross-posting (desktop hardware):
    You can obtain a 3.5" hard drive cage from here for $11 shipped.  Granted, it has a 120mm fan and space for 4 drives, but for the price, you can mod/cut all you want.
    Now your other issue/question: you will want a SATA III 6 Gb/s drive, 2TB, 7200 RPM.
    I am a volunteer. I am not an HP employee.

  • Free RAM - a difference in iBook performance between 25M and 1G?

    Hi...I keep several programs running on my iBook 1.33 with 768 onboard. Activity Monitor shows that at its lowest, my free RAM dips to around 25 or so megs when I'm streaming video from a news site.
    My question is whether or not there is a difference in the iBook's performance when I have 25 megs of free RAM... or 500 megs. Or does being "in the green" mean that there is simply enough, and being "greener" won't make a difference?
    I know that more is always better, but I'd like to know if more is necessary.
    The answer will determine whether or not I switch out the 512 for a 1G.
    I'd appreciate any feedback...Thanx!

    In those instances where chip RAM is needed (and not in a video-intensive
    situation, where you can't upgrade VRAM and regular RAM is not shared),
    the chip RAM is faster and more readily available to priority applications
    and the OS X system itself. Virtual Memory (VM) is slower, because it relies
    on the computer reading and writing to and from the internal hard disk drive,
    which is a slower and less direct process.
    Another way to speed up a computer with limited upgrade options would be to
    pay to have the internal hard disk drive replaced (or be brave, suffer any
    consequences, and DIY) with a new drive that has a faster spin rate, a larger
    buffer, and more free space. This and the chip RAM could make the computer
    act more like a faster model, at least until the hard disk drive gets
    fragmented or more than 3/4 full.
    There are more than a few things one can do to enhance the performance
    of the hard to upgrade iBook G4 (or iMac G4) since you can't change the
    CPU or make the system bus work any faster. Bottlenecks aside, a few
    items that can be upgraded, along with a regimen of routine maintenance
    can help almost any computer not pushed beyond its limits, to work better.
    Even with a hard disk drive only 75% full, the computer can be sluggish and
    waste processing and swap-file cycles (moving data bits as VM to and from
    the hard disk drive when the computer's limited resources are taxed)
    if the drive has never seen much maintenance. If you use an
    external FireWire-enclosed hard disk drive, and learn how to clone the
    whole iBook's drive contents over (be sure the clone copy can boot
    the computer before proceeding), you could then use Disk Utility to wipe
    the internal drive with the zero-overwrite option, totally erasing it, and
    reformat it again to clear any low-level issues and defragment the drive.
    Plus, pull any seldom-used saved items off the computer to free up hard
    disk drive space for the system to use as swap & VM, and you could
    reclaim some of the original illusion of speed now lost.
    {As the computer's OS gains more and more parts and updates, plus
    applications and associated files to sort through, it will run slower;
    VM adds to this mix as well, with a fuller and older hard disk drive.}
    In reference to: ' replacing an iBook G4's hard disk drive? ' you may wish to
    read links here: http://www.applelinks.com/index.php/forums/viewthread/142/
    In reference to bootable clones of OS X systems:
    http://www.bombich.com/software/ccc.html
    Minor to major preventative background maintenance can be handled
    through this utility interface tool; it can help OS X and
    your computer generally run a bit better. I use OnyX's 'automation'
    selection and also have this utility's preferences set for it to restart
    by itself after it runs all of the checkboxed items in this set. For this,
    see: Titanium Software - OnyX: http://www.titanium.free.fr/pgs/english.html
    Also, About Disk Utility's Repair Disk Permissions (& 'repair disk' from
    the booted installer's version of Disk Utility; research this further.)
    http://support.apple.com/kb/HT1452
    Sometimes, even just repairing those disk permissions can help; and
    the OnyX tool can run that, as part of the Automation sequence; but
    it should be run more often from D.U. than you'd need to use OnyX.
    Troubleshooting permissions issues in OS X (and using Disk Utility)
    http://docs.info.apple.com/article.html?artnum=106712
    There is a relationship between RAM, free HDD space as Virtual Memory,
    system maintenance, disk drive health, and other interrelated details.
    Good luck & happy computing!
