Why does ENET-232 driver use 98% CPU?

I have an ENET-232/4 device running under Windows XP Professional and am talking to it using LabVIEW 6.1. I am taking readings from an instrument every 5 seconds, with a small data exchange (6-byte command; the response is about 50 bytes). Everything seemed to work just fine, but when left overnight the computer was very sluggish. The nisdsusr process was using 98% of the CPU time, and it continued to do so until I quit LabVIEW. Stopping the vi, which closed the serial port, was not good enough. I re-started LabVIEW and my vi, and I can see the CPU usage jump up to 12%-14% every time I take a reading, and go to basically 0 in between. This is perfectly fine. Why did this process go to 100% CPU usage, and how can I prevent it from happening again? I am using the 1.01 serial device server driver, which is the latest version and is XP compatible (aside from the driver signing issue).

I know this thread is old, but should anyone happen upon it...
This problem (I've had it too) happens when network communication is lost and the serial device server goes into some kind of loop waiting for a TCP ACK that never comes back, or something like that. To fix it, the port that was being used has to be closed and then re-opened. I'm guessing this was your problem when leaving it to run: at some point network communication was lost and the driver went haywire and didn't recover.
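Since the failure mode described is lost network communication followed by a driver that never recovers, the defensive pattern is to treat any timeout or I/O error as fatal for the session and rebuild it from scratch. The sketch below is only an illustration of that close-and-reopen pattern, written in plain Java against a TCP socket; the host, port and command bytes are hypothetical, and in LabVIEW the equivalent is to close and re-open the serial session rather than keep retrying on the dead one.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReopenOnFailure {
    // Hypothetical device address; a real ENET-232 would normally be reached through the NI driver instead.
    private static final String HOST = "192.168.0.50";
    private static final int PORT = 9100;

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try (Socket s = open()) {
                poll(s);                      // take readings until something goes wrong
            } catch (IOException e) {
                System.err.println("Session lost: " + e + " - reopening");
            }
            Thread.sleep(5000);               // back off before rebuilding the session
        }
    }

    private static Socket open() throws IOException {
        Socket s = new Socket();
        s.connect(new InetSocketAddress(HOST, PORT), 2000); // connect timeout
        s.setSoTimeout(2000);                               // read timeout: never wait forever
        return s;
    }

    private static void poll(Socket s) throws IOException, InterruptedException {
        OutputStream out = s.getOutputStream();
        InputStream in = s.getInputStream();
        byte[] cmd = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06};  // placeholder 6-byte command
        byte[] buf = new byte[64];
        while (true) {
            out.write(cmd);
            out.flush();
            int n = in.read(buf);             // SocketTimeoutException (an IOException) if no reply
            if (n < 0) throw new IOException("connection closed by device");
            System.out.println("Got " + n + " bytes");
            Thread.sleep(5000);               // one reading every 5 seconds, as in the original setup
        }
    }
}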

Similar Messages

  • Why does my hard drive state I have used 500 GB when I have only used 280 GB?

    Why does my hard drive state I have used 500 GB when I have only used approximately 280 GB?

    Have you emptied your trash lately...
    Also... See these links...
    Apple ML = Increase Disc Space
    http://support.apple.com/kb/PH10677
    See Here  >  Where did my Disk Space go?
    And Here  >  The Storage Display

  • Why does my printer say "used supply in use"? (M1217nfw)

    Why does my printer say "used supply in use"? Model M1217nfw.

    Hello,
    The "Used Supply in Use" means that the printer believes the toner cartridges installed in the printer have been previously used in another printer.
    Have you recently changed the toner cartridge on this printer? Are you having issues printing?
    Remember, if you find any of my posts helpful or want to say thanks, make sure to click the white star under my name to give me Kudos.
    I really appreciate it!
    You should still be able to print without problems even though the message comes up.
    THX

  • Why does my printer say "used supply in use"?

    Why does my printer say "used supply in use"? And I can't print anything.

    Pretty sure it's to make sure you don't get good use out of your cartridges... basically this company trying to get 10 cents a print out of you.
    After 1100 prints, regardless of how much toner was used, if you take the cartridge (the genuine 85A) out at that point and put it back in, say after a jam or because you want to shake it, it tells you that you have a used cartridge. I have tested this repeatedly and got 500 more prints, with an annoying nag on every printed page that you can't stop. You have to take it out because this printer is notorious for a section the toner doesn't get to, even though there is probably over 50% of the toner left in it. You take it out, shake it, and can print hundreds and hundreds more pages... but after the 1100-print mark, since the printer tracks statistics, it goes into this "used mode" so the print averages won't go up. I'm pretty sure it's devious code... but what do I know, I just won't be getting new HP printers. They claim it can cause damage... but the heads are on the toner cartridge; if it's damaged, the cartridge is replaced and that's that... so their warnings are smoke and mirrors.

  • Why does the Java API use int instead of short or byte?

    Why does the Java API use int if short or even byte would be sufficient?
    Example: The DAY_OF_WEEK field in Calendar uses int.

    One of the points is that in benchmark tests of Java performance, int does far better than the short and byte data types.
    Please see the blog below, which talks about the same thing.
    Java Primative Speed
    -K
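    A related point, which the reply above doesn't mention but which follows from the Java language rules: arithmetic on byte and short operands is promoted to int anyway, so smaller integer types mostly cost casts rather than buying speed. A minimal sketch (the class name is mine, not from the thread):
    import java.util.Calendar;

    public class IntVsShort {
        public static void main(String[] args) {
            short a = 1, b = 2;
            // short c = a + b;          // does not compile: a + b is promoted to int
            short c = (short) (a + b);   // an explicit narrowing cast is required
            int dayOfWeek = Calendar.getInstance().get(Calendar.DAY_OF_WEEK); // the API simply returns int
            System.out.println(c + " " + dayOfWeek);
        }
    }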

  • Why does it say I'm using 3.8 GB of mail when my mail is deleted?

    Why does it say I'm using 3.8 GB of mail when my mail is deleted? Yes, even the trash.

    Quit Mail. Force quit if necessary.
    Back up all data. That means you know you can restore the Mail database, no matter what happens.
    Triple-click the text on the line below to select it:
    ~/Library/Mail/V2/MailData/Envelope Index
    Copy the selected text to the Clipboard (command-C). In the Finder, select
    Go ▹ Go to Folder
    from the menu bar. Paste into the box that opens (command-V), then press return.
    A Finder window will open with a file selected. Move the selected file to the Desktop, leaving the window open. Other files in the folder may have names that begin with "Envelope Index". Move those files, if any, to the Trash.
    Log out and log back in. Relaunch Mail. It should prompt you to re-import your messages. You may get a warning that the index is corrupt and that Mail has to quit. Click OK.
    Test. If Mail now works as expected, you can delete the file you moved to the Desktop. Otherwise, post your results.
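    For what it's worth, the file shuffle in the steps above can also be scripted. This is only an illustrative Java sketch of those steps (the paths come from the instructions; moving files into ~/.Trash stands in for dragging them to the Trash); quit Mail and back up first, exactly as stated.
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class RebuildEnvelopeIndex {
        public static void main(String[] args) throws IOException {
            Path home = Paths.get(System.getProperty("user.home"));
            Path mailData = home.resolve("Library/Mail/V2/MailData");
            Path desktop = home.resolve("Desktop");
            Path trash = home.resolve(".Trash");

            try (Stream<Path> files = Files.list(mailData)) {
                for (Path p : (Iterable<Path>) files.filter(
                        f -> f.getFileName().toString().startsWith("Envelope Index"))::iterator) {
                    // The main index goes to the Desktop (kept as a fallback); its siblings go to the Trash.
                    Path target = p.getFileName().toString().equals("Envelope Index") ? desktop : trash;
                    Files.move(p, target.resolve(p.getFileName()));
                    System.out.println("Moved " + p.getFileName() + " -> " + target);
                }
            }
        }
    }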

  • Why does my DVD drive not load?

    Why does my DVD drive not load?

    It might be broken. Take your machine to your local Apple Store or an AASP for a free diagnosis and estimate for repair.
    Clinton

  • Why does my query not use an index?

    I have a table with some processed rows (state: 9) and some unprocessed rows (states: 0, 1, 2, 3, 4).
    This table has over 120000 rows, and this number will grow.
    Most of the rows are processed and most of them also contain a group id. The number of groups is relatively small (let's assume 20).
    I would like to obtain the oldest some_date for every group. This value has to be outer joined to an online report (which contains one row for each group).
    Here is my set-up:
    Tested on: 10.2.0.4 (Solaris), 10.2.0.1 (WinXp)
    drop table t purge;
    create table t(
      id number not null primary key,
      grp_id number,
      state number,
      some_date date,
      pad char(200)
    );
    insert into t(id, grp_id, state, some_date, pad)
    select level,
         trunc(dbms_random.value(0,20)),
            9,
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 120000;
    insert into t(id, grp_id, state, some_date, pad)
    select level + 120000,
         trunc(dbms_random.value(0,20)),
            trunc(dbms_random.value(0,5)),
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 2000;
    commit;
    exec dbms_stats.gather_table_stats(user, 'T', estimate_percent=>100, method_opt=>'FOR ALL COLUMNS SIZE 1');
    Tom Kyte's printtab
    ==============================
    TABLE_NAME                    : T
    NUM_ROWS                      : 122000
    BLOCKS                        : 3834
    I know this could easily be solved by a fast-refresh-on-commit materialized view like this:
    select
      grp_id,
      min(some_date)
    from
      t
    where
      state in (0,1,2,3,4)
    group by
      grp_id;
    Plus, I would have to create a materialized view log on (grp_id, some_date, state).
    Number of rows with active state will be always relatively small. Let's assume 1000-2000.
    So another idea was to create a selective index: an index which would contain only data for rows in an active state.
    Something like this:
    create index fidx_active on t ( 
      case state 
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end,
      case state
        when 0 then some_date
        when 1 then some_date
        when 2 then some_date
        when 3 then some_date
        when 4 then some_date
      end) compress 1;
    So a tuple (grp_id, some_date) is projected to the tuple (null, null) when the state is not an active state, and therefore it is not indexed.
    We can save even more space by compressing the 1st expression.
    analyze index fidx_active validate structure;
    select * from index_stats
    @pr
    Tom Kyte's printtab
    ==============================
    HEIGHT                        : 2
    BLOCKS                        : 16
    NAME                          : FIDX_ACTIV
    LF_ROWS                       : 2000 <-- we're indexing only active rows
    LF_BLKS                       : 6 <-- small index: 1 root block with 6 leaf blocks
    BR_ROWS                       : 5
    BR_BLKS                       : 1
    DISTINCT_KEYS                 : 2000
    PCT_USED                      : 69
    PRE_ROWS                      : 25
    PRE_ROWS_LEN                  : 224
    OPT_CMPR_COUNT                : 1
    OPT_CMPR_PCTSAVE              : 0
    Note: @pr is Tom Kyte's print-table script as adapted by Tanel Poder (I'm using Tanel's library).
    Then I created a query to be outer joined to the report (report contains a row for every group).
    I want to achieve a full scan of the index.
    select
      case state -- 1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end grp_id,
      min(case state --second expression
            when 0 then some_date
            when 1 then some_date
            when 2 then some_date
            when 3 then some_date
            when 4 then some_date
          end) as mintime
    from t 
    where
      case state --1st expression: at least one index column has to be not null
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end is not null
    group by
      case state --1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end;
    Doc's snippet:
    13.5.3.6 Full Scans
    A full scan is available if a predicate references one of the columns in the index. The predicate does not need to be an index driver. A full scan is also available when there is no predicate, if both the following conditions are met:
    All of the columns in the table referenced in the query are included in the index.
    At least one of the index columns is not null.
    A full scan can be used to eliminate a sort operation, because the data is ordered by the index key. It reads the blocks singly.
    13.5.3.7 Fast Full Index Scans
    Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.
    You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes.
    A fast full scan is faster than a normal full index scan in that it can use multiblock I/O and can be parallelized just like a table scan.
    So the question is: why does Oracle do a full table scan?
    Everything needed is in the index and one expression is not null, but an index (fast) full scan is not even considered by the CBO (I did a 10053 trace).
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     85 |     20 |00:00:00.11 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |   2000 |00:00:00.10 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
                  THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL)Let's try some minimalistic examples. Firstly with no FBI.
    create index idx_grp_id on t(grp_id);
    select grp_id,
           min(grp_id) min
    from t
    where grp_id is not null
    group by grp_id;
    | Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |   1 |  HASH GROUP BY        |            |      1 |     20 |     20 |00:00:01.00 |     244 |    237 |
    |*  2 |   INDEX FAST FULL SCAN| IDX_GRP_ID |      1 |    122K|    122K|00:00:00.54 |     244 |    237 |
    Predicate Information (identified by operation id):
       2 - filter("GRP_ID" IS NOT NULL)This kind of output I was expected to see with FBI. Index FFS was used although grp_id has no NOT NULL constraint.
    Let's try a simple FBI.
    create index fidx_grp_id on t(trunc(grp_id));
    select trunc(grp_id),
           min(trunc(grp_id)) min
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id);
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.94 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    Again, index (fast) full scan not even considered by CBO.
    I tried:
    alter table t modify grp_id not null;
    alter table t add constraint trunc_not_null check (trunc(grp_id) is not null);
    I even tried to set the table's hidden column (SYS_NC00008$) to NOT NULL.
    None of it has any effect; the FTS is still used.
    Let's try another query:
    select distinct trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    | Id  | Operation             | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH UNIQUE          |             |      1 |     20 |     20 |00:00:00.85 |     244 |
    |*  2 |   INDEX FAST FULL SCAN| FIDX_GRP_ID |      1 |    122K|    122K|00:00:00.49 |     244 |
    Predicate Information (identified by operation id):
       2 - filter("T"."SYS_NC00008$" IS NOT NULL)Here the index FFS is used..
    Let's try one more query, very similar to the above query:
    select trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id)
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.86 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |    122K|    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    And again, no index full scan.
    So my next question is:
    What are the restrictions which prevent an index (fast) full scan from being used in these scenarios?
    Thank you very much for your answers.

    I'll start off with the caveat that I'm no Jonathan Lewis, so hopefully someone will be able to come along and give you a more coherent explanation than I'm going to attempt here.
    It looks like the application of the MIN function against the CASE expression is confusing the optimizer and disallowing the usage of your FBI. I tested this against my 11.2.0.1 instance, and there your query chooses the fast full scan without being nudged in the right direction.
    That being said, I was able to get this to use a fast full scan on my 10g instance, but I had to jiggle the wires a bit. I modified your original query slightly, just to make it easier to do my fiddling.
    original (in the sense that it still takes the full table scan) query
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null
    )
    select --+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      2 |     33 |     40 |00:00:00.07 |    7646 |
    |*  2 |   TABLE ACCESS FULL| T    |      2 |     33 |   4000 |00:00:00.08 |    7646 |
    Predicate Information (identified by operation id):
       2 - filter((CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
               THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL AND
               CASE "STATE" WHEN 0 THEN "SOME_DATE" WHEN 1 THEN "SOME_DATE" WHEN 2 THEN
               "SOME_DATE" WHEN 3 THEN "SOME_DATE" WHEN 4 THEN "SOME_DATE" END  IS NOT
               NULL))
    modified version where we prevent the MIN function from being applied too early, by using ROWNUM
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null 
      and rownum > 0
    )
    select --+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY            |             |      2 |     20 |     40 |00:00:00.01 |      18 |
    |   2 |   VIEW                    |             |      2 |     33 |   4000 |00:00:00.07 |      18 |
    |   3 |    COUNT                  |             |      2 |        |   4000 |00:00:00.05 |      18 |
    |*  4 |     FILTER                |             |      2 |        |   4000 |00:00:00.03 |      18 |
    |*  5 |      INDEX FAST FULL SCAN | FIDX_ACTIVE |      2 |     33 |   4000 |00:00:00.01 |      18 |
    Predicate Information (identified by operation id):
       4 - filter(ROWNUM>0)
       5 - filter(("T"."SYS_NC00006$" IS NOT NULL AND "T"."SYS_NC00007$" IS NOT NULL))

  • Why does my external drive disappear from my desktop?

    I am switching from PC to Mac. I have two external drives. Both have been formatted in FAT32 by the Mac. When I plug them in, the desktop and Finder see them; then both of them disappear from the desktop and from Finder. They can still be seen in Disk Utility and System Profiler. How can I find them, and why does this happen?

    If you daisy-chained them, try without. There may not be enough power to handle both at the same time. Other than that, read some info:
    1. Shut-down your Mac, and unplug the power cord
    2. Turn the power off on your external FireWire devices
    3. Unplug the FireWire devices from the Mac
    4. Wait for 5 min.
    5. Plug the power cord to your Mac only
    6. For PPC Macs: Restart the Mac while holding the Option-Apple-O-F, and keep holding until you get the ">" prompt, then release the keys
    7. At the ">" prompt type:
          reset-nvram and hit the Return key
          set-defaults and hit the Return key
          reset-all and hit the Return key
    the last command will restart your mac
    8. Shut down your Mac
    9. Connect all your FireWire devices to the Mac and turn them on
    10. Restart your Mac.
    All your FireWire devices should reappear; if not, repeat the procedure.
    How to avoid the issue :
    The only really proven way to avoid burning up a FireWire port is to connect all devices and turn them on PRIOR to turning on the Mac. Likewise, one should unplug them and turn them off AFTER the Mac has been turned off. If you need to connect another device, then you're in for a shutdown of your machine...
    It's a tad annoying, but it guarantees that the FW ports won't be damaged.
    Be careful when using self-powered devices such as webcams, iPods, hard drives or hubs, as they can destroy the port pretty easily. Another thing is to avoid daisy-chaining hard drives.
    When the FW port doesn't respond anymore :
    In this case, peripherals won't be mounted upon plugging, and won't be displayed in Apple's System Profiler. The self-powered devices will still be fed by the port, but won't respond either.
    It happens that the PHY just hangs after a surge or a random problem. Once hung, the port will not respond any longer. It is possible to reset the component by going through the following steps:
    1° boot the PPC mac in Open Firmware by holding [ Apple key - Option - O - F ] after the startup chime. 
2° you'll get to a command prompt. the keyboard mapping will be QWERTY, so pay attention when you type the following : 
RESET-NVRAM (enter) 
RESET-ALL (enter) 
3° Now the mac should restart itself and the port should function properly again. 
If it still doesn't work, then it means that the PHY is damaged.
    http://www.hardmac.com/articles/16/
    From Kappy post:
    922 is only used in dual drive cases to provide a "built-in firmware RAID support"
    912 or 912+ is FW800, the "+" means USB2
    911 is FW400
    924 adds eSATA support along with FW400/800
    MacSales, OWC Tech Support library is where I would go first, as they do tend to have or links to, firmware updates for Oxford.
    From Wiebetech:
    I just upgraded my Mac OS, and now I'm having trouble with my FireWire device. What can I do?
    A: With nearly every OS update Apple releases, some external drives are not recognized after the update. The typical symptom is that the device will not mount but the volume is visible and grayed out in Disk Utility. This means that the device is working properly, but the OS does not recognize the volume as mountable. Sometimes the drive does not show up at all.
    The first thing to do is to see if repairing Permissions with Apple's Disk Utility First Aid feature solves the problem. If not, try repairing the disk with First Aid. Another possible solution is to zap the PRAM. This is done with a keyboard command while rebooting. Restart your computer and hold down the Command, Option, P and R keys. Consult the Apple web site for details.
    If this does not work, then try pushing the PMU reset button on the logic board. Consult the Apple web site for the location of the button for your Mac. If the drive still does not show up, shut down, disconnect all peripheral devices, and boot from the Mac OS Installation CD. The later the OS version, the better. Do not reinstall the OS, but when the Installer is loaded up and ready to begin, go to the File menu and use the Disk Utility program. Connect the drive that is having problems, then start the Disk Utility application. If you see the external drive and it is not grayed out, you know the drive is okay. Run Disk First Aid on it anyway. Then run it on your internal boot drive while you are at it. When done, unmount the external drive and UNPLUG it! Then restart the Mac.
    When the Mac has restarted and is running on the internal boot drive, reconnect the external drive.

  • Why does the SCHDNEGACTION scheduler job have high CPU usage?

    Hi,
    In my embedded device (dual CPU, 1.5 GHz) the SCHDNEGACTION scheduler job has 90% CPU usage. Why does the scheduler job have such high CPU usage? What is the job for?

    Hi,
    This job is not created by the scheduler itself. You will have to find out what component creates the job and what exactly it does, in order to figure out why it is using so much CPU and what the effects of disabling it (which would prevent it from running) would be.
    Hope this helps,
    Ravi.

  • Why does Lightroom (and Photoshop) use AdobeRGB and/or ProPhoto RGB as default color spaces, when most monitors are standard gamut (sRGB) and cannot display the benefits of those wider gamuts?

    I've asked this in a couple of other places online as I try to wrap my head around color management, but the answer continues to elude me. That, or I've had it explained and I just didn't comprehend. So I continue. My confusion is this: everywhere it seems, experts and gurus and teachers and generally good, kind people of knowledge claim the benefits (in most instances, though not all) of working in AdobeRGB and ProPhoto RGB. And yet nobody seems to mention that the majority of people - including presumably many of those championing the wider gamut color spaces - are working on standard gamut displays. And to my mind, this is a huge oversight.
    What it means is, at best, those working this way are seeing nothing different than photos edited/output in sRGB, because [fortunately] the photos they took didn't include colors that exceeded sRGB's real estate. But at worst, they're editing blind, and probably messing up their work. That landscape they shot with all those lush greens that sRGB can't handle? Well, if they're working in AdobeRGB on a standard gamut display, they can't see those greens either. So, as I understand it, the color-managed software is going to algorithmically rein in that wild green and bring it down to sRGB's turf (and this I believe is where relative and perceptual rendering intents come into play), and give them the best approximation within the display's gamut capabilities.
    But now this person is editing thinking they're in AdobeRGB, thinking that green is AdobeRGB's green, but it's not. So any changes they make to this image, they're making to an image that's displaying to their eyes as sRGB, even if the color space is, technically, AdobeRGB. So they save and output this image as an AdobeRGB file, unaware that they altered it seeing inaccurate color. The person who opens this file on a wide gamut monitor, in the appropriate (wide gamut) color space, is now going to see this image "accurately" for the first time. Only it was edited by someone who hadn't seen it accurately. So who knows what it looks like. And if the person who edited it were there, they'd be like, "wait, that's not what I sent you!"
    Am I wrong? I feel like I'm in the Twilight Zone. I shoot everything RAW, and I someday would love to see these photos opened up in a nice, big color space. And since they're RAW, I will, and probably not too far in the future. But right now I export everything to sRGB, because - internet standards aside - I don't know anybody I'd share my photos with who has a wide gamut monitor. I mean, as far as I know, most standard gamut monitors can't even display 100% sRGB! I just bought a really nice QHD display marketed toward design and photography professionals, and I don't think it's 100% either. I thought of getting the wide gamut version, but was advised to stay away because so much of my day-to-day usage would be with things that didn't utilize those gamuts, and generally speaking, my colors would be off. So I went with the standard gamut, like 99% of everybody else.
    So what should I do? As it is, I have my Photoshop color space set to sRGB. I just read that Lightroom as its default uses ProPhoto in the Develop module, and AdobeRGB in the Library (for previews and such).
    Thanks for any help!
    Michael

    Okay. Going bigger is better, do so when you can (in 16-bit). Darn, those TIFs are big though. So, ideally, one really doesn't want to take the picture to Photoshop until one has to, right? Because as long as it's in LR, it's going to be a comparatively small file (a dozen or two MBs vs say 150 as a TIF). And doesn't LR's develop module use the same 'engine' or something, as ACR plug-in? So if your adjustments are basic, able to be done in either LR Develop, or PS ACR, all things being equal, choose to stay in LR?
    ssprengel Apr 28, 2015 9:40 PM
    PS RGB Workspace:  ProPhotoRGB and I convert any 8-bit documents to 16-bit before doing any adjustments.
    Why does one convert 8-bit pics to 16-bit? Not sure if this is an apt comparison, but it seems to me that that's kind of like upscaling in video, which I've always taken to mean adding redundant information to a file so that it 'fits' the larger canvas, but with no material improvement. In the case of video, I think I'd rather watch a 1080p movie on an HD (1080) screen (here I go again with my pixel-to-pixel prejudice) than watch a 1080p movie on a 4K TV, upscaled. But I'm ready to be wrong here, too. Maybe there would be no discernible difference? Maybe even though the source material was 1080p, I could still sit closer to the 4K TV, because of the smaller and more densely packed array of pixels. Or maybe I only get that benefit when it's a 4K picture on a 4K screen? Anyway, this is probably a different can of worms. I'm assuming that in the case of photo editing, converting from 8-bit to 16-bit allows one more room to work before bad things start to happen?
    I'm recent to Lightroom and still in the process of organizing from Aperture. Being forced to "this is your life" through all the years (I don't recommend!), I realize probably all of my pictures older than 7 years ago are jpeg, and probably low-fi at that. I'm wondering how I should handle them, if and when I do. I'm noting your settings, ssprengel.
    ssprengel Apr 28, 2015 9:40 PM
    I save my PS intermediate or final master copy of my work as a 16-bit TIF still in the ProPhotoRGB, and only when I'm ready to share the image do I convert to sRGB then 8-bits, in that order, then do File / Save As: Format=JPG.
    Part of the same question, I guess - why convert back to 8-bits? Is it for the recipient?  Do some machines not read 16-bit? Something else?
    For those of you working in these larger color spaces and not working with a wide gamut display, I'd love to know if there are any reasons you choose not to. Because I guess my biggest concern in all of this has been tied to what we're potentially losing by not seeing the breadth of the color space we work in represented while making value adjustments to our images. Based on what several have said here, it seems that the instances when our displays are unable to represent something as intended are infrequent, and when they do arise, they're usually not extreme.
    Simon G E Garrett Apr 29, 2015 4:57 AM
    With 8 bits, there are 256 possible values.  If you use those 8 bits to cover a wider range of colours, then the difference between two adjacent values - between 100 and 101, say - is a larger difference in colour.  With ProPhoto RGB in 8-bits there is a chance that this is visible, so a smooth colour wedge might look like a staircase.  Hence ProPhoto RGB files might need to be kept as 16-bit TIFs, which of course are much, much bigger than 8-bit jpegs.
    Over the course of my 'studies' I came across a side-by-side comparison of either two color spaces and how they handled value gradations, or 8-bit vs 16-bit in the same color space. One was a very smooth gradient, and the other was more like a series of columns, or as you say, a staircase. Maybe it was comparing sRGB with AdobeRGB, both as 8-bit. And how they handled the same "section" of value change. They're both working with 256 choices, right? So there might be some instances where, in 8-bit, the (numerically) same segment of values is smoother in sRGB than in AdobeRGB, no? Because of the example Simon illustrated above?
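    To make the staircase/banding point concrete, here is a small illustrative Java sketch (mine, not from the thread): it quantizes a smooth 0-to-1 ramp to 8-bit and 16-bit precision and counts how many distinct output levels survive. With only 256 levels, neighbouring samples collapse into the same value, which is the visible banding; spreading those same 256 levels over a wider gamut only makes each step a bigger colour jump.
    import java.util.stream.IntStream;

    public class QuantizationDemo {
        public static void main(String[] args) {
            // A smooth ramp of 1000 samples from 0.0 to 1.0, quantized to 8-bit and 16-bit levels.
            int samples = 1000;
            long distinct8 = IntStream.range(0, samples)
                    .map(i -> (int) Math.round(i / (double) (samples - 1) * 255))
                    .distinct().count();
            long distinct16 = IntStream.range(0, samples)
                    .map(i -> (int) Math.round(i / (double) (samples - 1) * 65535))
                    .distinct().count();
            // 8-bit: at most 256 output levels, so neighbouring samples merge into visible bands.
            // 16-bit: 65536 levels, so all 1000 samples stay distinct.
            System.out.println("8-bit levels used:  " + distinct8);
            System.out.println("16-bit levels used: " + distinct16);
        }
    }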
    Oh, also -- in my Lumix LX100 the options for color space are sRGB or AdobeRGB. Am I correct to say that when I'm shooting RAW, these are irrelevant or ignored? I know there are instances (certain camera effects) where the camera forces the shot as a jpeg, and usually in that instance I believe it will be forced sRGB.
    Thanks again. I think it's time to change some settings..

  • Why does my camera roll use 10 GB of my space when I only have 508 photos and videos on my camera roll?

    I have some problems with the space on my iPhone 4S. I always get a message that says that I can't record or can't take another picture because I don't have enough space on my phone. When I go to Settings and check, my photos and videos use almost 10 GB (9.8 GB). I have 407 pictures and 101 videos (not very long videos, more like short but many). In total I have 508 photos and videos in my camera roll. My friends have over a thousand pictures and videos, but they haven't used 10 GB of their space. So why do my 508 pictures and videos use so much more space?

    Videos add up fast.  A 30 second video on a 4S could easily be 85 MB.  101 of these would be almost 8.6 GB.  Add 407 photos at 3 MB each and you reach 9.8 GB.
    I would suggest you import some of these to your computer (especially some of the videos) and delete them from your phone, as explained here: http://support.apple.com/kb/HT4083.
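    Spelling out the arithmetic in the reply above (the 85 MB and 3 MB figures are the reply's rough estimates, and GB here means 1000 MB):
    public class CameraRollEstimate {
        public static void main(String[] args) {
            double videoMb = 101 * 85.0;   // 101 videos at ~85 MB each
            double photoMb = 407 * 3.0;    // 407 photos at ~3 MB each
            double totalGb = (videoMb + photoMb) / 1000.0;
            System.out.printf("Videos: %.1f GB, photos: %.1f GB, total: %.1f GB%n",
                    videoMb / 1000.0, photoMb / 1000.0, totalGb);
        }
    }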

  • Why does moving the cursor peg the CPU in Numbers?

    Why does moving the cursor, even by one cell, peg the CPU in Numbers?

    Hello
    Before posting yesterday, I tested with a table of 4435 x 8 cells on the iMac described below.
    I will be more precise than yesterday.
    Numbers recalculates the entire document after every change. A single new character is sufficient.
    Twenty years ago, AppleWorks designers were kind enough to recalculate only what really needed it.
    It seems that the new developers are not relying upon intelligence but upon the processor's brute force.
    Their code will be efficient … on the machines which will be available in five years or more.
    Alas, by that time it will no longer be compatible with the operating system.
    Given this awful coding, when AutoSave applies, it must write an entire new index.xml file to disk.
    Same thing when Versions applies.
    Under Snow Leopard, these two features don't kick in.
    If you wish, you may send your 'offending' document to my mailbox so I will be able to check the way it behaves here. Don't worry, I don't look at what is really stored, only at the document's behavior.
    Click my blue name to get my address.
    Yvan KOENIG (VALLAURIS, France) jeudi 12 janvier 2012
    iMac 21”5, i7, 2.8 GHz, 12 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.2

  • Why does my hard drive switch on every so often?

    Why does my Mac's hard drive keep switching on every so often?

    No - it happens when I am NOT using the computer - unless it thinks it is backing up - but I have not set Time Machine to do this.

  • Why does the MacBook Pro use both types of storage?

    Why does the 13-inch Retina have both the 8 GB hard drive and the flash drive?

    Oops, reading your question again, I see there is a misunderstanding:
    http://www.apple.com/macbook-pro/specs-retina/
    The 8GB is not hard drive memory.  It is chip memory just for multi-tasking purposes, and it can be custom configured to 16 GB of multi-tasking memory.  If you wanted to store applications or data files on it, you can create a RAM disk, but it actually would be copying the information off the SSD.  There is no "permanent" storage of data on the 8GB or 16GB of RAM.
    There is no platter hard drive anymore.
    The 128 GB is the SSD memory on the 13" model by default, and can be custom configured to 512GB of SSD memory.
    The odd thing is your SSD can do swap file storage for multitasking, but also doubles as your place to store actual data files.
    For a while Apple mixed both platter and SSD. It looks like that is no longer the case, at least with the 13" Retina.
    SSD memory is comparable to hard drive memory in that it is more "permanent" storage after reboot.  Calling it permanent is a misnomer though, since hard drives and SSDs can both fail without any warning, and a backup is always recommended:
    https://discussions.apple.com/docs/DOC-1992

Maybe you are looking for

  • Is there a way to disable the itunes video pop up for music videos in itunes 11?

    I'm wondering if there is a way to disable the video window from popping up every time I change a music video track in iTunes 11. In previous versions I always used "play in artwork viewer", but I accidentally updated and now that option is no longer there.

  • Sending a message to fax using JavaMail

    Hello all, we are trying to create a system to send messages. The input will come from an Oracle 9i (9.0.2) database; our goal is to create email messages which can be delivered either to an email address or to a fax number. The system works if all

  • Column filter issue in  CSV format of BOXIR3

    Hi, currently in my InfoView, suppose I have a Webi document in which I have 4 columns selected in a report from the data provider of the document (assuming that the data provider has 10 columns). Now, when I am trying to save this document on my c

  • How to back up an unhealthy hard drive

    My MacBook Pro will start up but it will not boot up. I've tried many different things and they have all failed. I took my MacBook to a computer store and they told me that my hard drive is unhealthy and needs to be repaired. Is there any way of gett

  • Problem with rounded in casting

    Hi all, I have an int and a float that could be 1, 0.1, 0.01. Then I need two operations: int / float and int * float; I am interested only in the integer part, but if I have (int) (int * float) I get a rounding error. Is there a method for solve this