Is it possible to improve scores with large collections? Advice please!

Hi, I have about $7,000 in collections from Natl Crysis from broken apartment leases due to job cutbacks and job loss in 2014. Since June I've been able to increase my scores from the 490s to the low 500s. I was able to get an auto loan through Toyota Financial (miraculously!) and payments start on the 15th of this month. I have a secured card through a credit union, but it only reports to Experian and TransUnion. I've been declined for pretty much every other secured card I've applied for...which I didn't know was possible, but I still have a couple of options left, like OpenSky. Fingerhut is reporting positively as well. Assuming I pay my bills on time from here on out and maybe open one more secured card, can my scores continue to increase? Or will the huge amount I owe in collections hold me back until they fall off in 5-6 years? I've got some other small collection accounts, around $1,000 total, that I could pay off if it would help, but I've pretty much decided I can't pay the ones from Natl Crysis any time soon, since it's just such a large chunk of money even if I settle. What do I do?

I have 1 paid collection and 6 late payments (120+ days) from a Cap 1 card on TU/EX, and my scores there are 670/686. EQ has the 6 late payments but no collection, and it's at 700. So yes, get some revolving credit reporting and your scores should keep going up. As far as I know, the amount of the collection doesn't matter; a paid and an unpaid collection have about the same effect on your scores.

Similar Messages

  • I have Photoshop 5.0 Limited Edition and can't load it onto my PC with Windows 7. Advice please? Thanks

    I have Photoshop 5.0 Limited Edition and can't load it onto my PC with Windows 7. Advice please? Thanks.

    It is not compatible with Windows 7.  Advice could include telling you to purchase the latest version.  The other option would be to buy an old computer that is capable of running it.

  • Possible hard drive problem with MacBook Pro? PLEASE HELP!!!

    My MacBook Pro loads to the login screen with my personal account (which shows a grey silhouette), my student account (which shows a custom pic) and the guest account (same grey silhouette), but there is no feedback when I click on anything. I have restarted it multiple times and don't get anywhere with it. I had just updated stuff on the MacBook prior to this incident. I need help because I have all of my business, school, and personal things on this machine. What could be the cause of this? Why is it happening? I need it fixed ASAP as I have work that needs to be completed.
    P.S. I am unsure of the exact operating system, but it was the Mountain Lion it came with. I was planning on upgrading to Mavericks tonight until this problem arose.

    "Any drive can fail at any time."
    Don't update anything until you have a large-ish external drive and Time Machine or other backup working.
    You dodged a bullet this time. Next time, maybe you will not be so lucky.

  • Possible Bug in DI with Gross Profit? Please confirm.

    The following works:
    1) Add an Inventory Item and set the "Items per Sales Unit" field to "0.1". The "Items per Purchase Unit" is set to "1".
    2) Create a goods receipt for 1 purchase unit.
    3) Create an A/R invoice for 1 unit.
    This should add a line item for 0.1 inventory units and set the price to 0.1 * <itemprice> which it does.
    4) Click on the Gross Profit money bag and everything displays correctly. The journal entry for COGS is also correct.
    What doesn't work:
    If you try the above using the DI API, the journal entry for COGS is correct, but when you find the A/R invoice and click the Gross Profit money bag it shows that you sold the item at a loss, since it uses the full purchase-unit price instead of the partial sales-unit price.
    If you then go and manually add an A/R invoice for the item added through the DI API, gross profit calculates correctly.
    There seems to be a problem with the gross profit when you add the invoice through the DI API. Is there a parameter I could be missing when adding SOs or A/R Invoices?
    System Info:
    SBO 2005 A (6.80.123)
    SQL Server
    Document Settings > Base Price Origin set to Item Cost

    More information:
    After further research I found that the DI API is setting the unit to the Sales Unit (which is correct), but it sets the price to the Inventory Unit price.
    When I set the price via the DI API to what it should be then the discount column was set to 90%.
    Is this by design? If so, what do I need to do to correct this problem?
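    For what it's worth, the 90% discount looks consistent with B1 keeping the inventory-unit price as the line's base price (per the "Base Price Origin set to Item Cost" setting above) and then back-calculating the discount from the sales-unit price that gets set. A quick arithmetic check of that reading, using a purely hypothetical item price, is below; this is just my interpretation of the numbers, not a confirmed explanation.
    # Hypothetical numbers to sanity-check why the discount column shows 90%.
    # Assumption: the line's base price stays at the inventory-unit price,
    # while the price being set is for one sales unit (0.1 inventory units).
    item_price_per_inventory_unit = 100.0   # hypothetical base price
    items_per_sales_unit = 0.1              # from the item master data above
    price_per_sales_unit = items_per_sales_unit * item_price_per_inventory_unit  # 10.0
    # If the base price stays at the inventory-unit price, the back-calculated discount is:
    discount_percent = (1 - price_per_sales_unit / item_price_per_inventory_unit) * 100
    print(discount_percent)  # 90.0 -- matching the discount column reported above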

  • Pavillion dv6-2150 needs a larger HD; advice please

    I need a new hard drive with more space on my Pavillion dv6-2150 laptop.  It reports 284GB total space.  Where on the HP site should I go to find suitable replacements for this model?
    Also, since it was bought used and I have no W7 64bit disk, if I replace the HD will I have to buy W7 or is there a way to put the existing OS (maybe from the D: recovery drive??) on the new hard drive?
    Thanks,
    Mike

    Hi, Mike:
    Below is the link to the service manual for your notebook.
    http://h10032.www1.hp.com/ctg/Manual/c01860375.pdf
    Chapter 3 lists the available HDD's.
    You would be safe going to a 750 GB SATA II notebook hard drive.
    You can order recovery disks by clicking on the link below and then click on the link to order recovery media.
    http://h10025.www1.hp.com/ewfrf/wc/softwareCategory?os=4063&lc=en&cc=us&dlc=en&sw_lang=&product=4121...
    You may be interested in this drive. It is a good one.
    It is on sale and there is an additional $15.00 off with the promo code.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136835
    If you can read the Windows 7 product key on the PC, you can also make your own W7 installation disk by reading the info below.
    If you can read the 25 character Microsoft windows 7 product key, you can download plain Windows 7 ISO files to burn to a DVD for the version of windows that came installed on your PC, and that is listed on the Microsoft COA sticker on your PC's case.
    Burn the ISO using the Burn ISO option on your DVD burning program and burn at the slowest possible speed your program will allow. This will create a bootable DVD.
    Or use the Windows 7 USB/DVD installation tool to compile the ISO file you download from Digital River. Link and instructions below. You need a 4 GB flash drive to use the USB method of compilation.
    http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool
    Use the 25 character product key on the PC to activate the installation.
    The key will activate either a 32 or 64 bit installation.
    Then go to the PC's support and driver page to install the drivers you need.
    Link to the W7 ISO file downloads is below.
    http://www.mydigitallife.info/official-windows-7-sp1-iso-from-digital-river/

  • Issues with family sharing - Advice please

    I am trying to get family sharing to work and believe I have followed all instructions.
    - Two iPhone 6 phones, both on iOS 8
    - I have sent the Family Sharing invite from my account and turned on sharing. I am signed in to the App Store/iTunes Store with the same account I am using for Family Sharing.
    - My spouse has accepted the invite; she has her own Apple ID and iCloud, but is signed in to the App Store/iTunes Store with MY Apple ID as requested; she also shares her purchases.
    I am able to see what she has purchased, but she is not able to see mine. She is getting an error message that her account is associated with a different iCloud account.
    Very confusing. Any solution?

    Posted too soon.
    Here's what we're running into:  He goes to the App store, locates the app, clicks 'Get' and 'Install', is prompted for me to enter the security code for our credit card, etc.  Then he receives a message that indicates that a family member has already purchased the app, etc.  He clicks 'ok' and the App store seems to try to proceed with downloading it.  However, before it can go anywhere, it stalls out with a 'Cannot connect to iTunes' error.
    This seems to be isolated to the older iPods.  We know the app itself is compatible with v6.1.6, and the device doesn't appear to be having any issues connecting to the iTunes store for any other apps.  What should he do?

  • ITunes Extremely Slow with large library

    I recently inherited a huge amount of music (36,000 tracks / 211 GB) which I keep on a Time Capsule as an external drive connected to an AirPort Express, which I use as a bridge to my home stereo. iTunes is pointed to the TC as the music source and there is no music content on my MacBook's HD, only the iTunes library itself and the cover art. I believe I have the standard RAM (black MacBook, purchased June 2007, which I think is 1 GB).
    iTunes is now extremely slow: in Cover Flow I get constant spinning balls, and it takes quite a while for the program to close. Do I need more RAM (I think there is a 2 GB max for my MacBook)? Is 36,000 tracks more than iTunes was designed to handle? Any chance storing the content on an external drive via WiFi affects the iTunes library response?

    I haven't had experience with newer Intel macs with this issue, so I'm a bit hesitant to try to answer, but here goes. This may not at ALL apply to what you're dealing with.
    The problem of incredibly slow behavior with large libraries in iTunes has been an issue since at LEAST 7.0 if not much earlier. I have over 130,000 tracks in my library, stored on a WD 1TB studio drive connected to my "home media server - a G4 Cube" via firewire and shared over ethernet to 3 macs. Do a search for "slow large library" in these forums and you will see NUMEROUS threads all complaining about various issues people seem to have when the number of tracks gets into the tens of thousands, much less over 100k.
    In any case, there are some things I've found that seem to help:
    1. Keep the actual library (database) files on the boot drive. IF you have the music files on a separate drive, it helps to keep the iTunes Library folder on the internal drive with the cover art, where it can be accessed faster. This is the database (xml) file that iTunes loads and saves when changes are made to the tags or tracks.
    2. Also, if you've not got a lot of playlists to worry about saving (i don't use playlists much, just play cds out of the browser mostly, or have some smart playlists).. it helps to rebuild the iTunes library from scratch at some point. I have no reason to believe this, but I assume it creates a new optimized database this way. (maybe someone can correct me on this... after say months of retagging tracks, playing music, creating and deleting playlists etc... does the database end up 'messier' OR does it correctly rewrite everything and optimize constantly or when you quit itunes?) What I DO know is that if you shorten the path to the music, it will be a smaller file. So don't make it "My Firewire Drive/Music Files/iTunes Files/CDs and MP3s/By Artists/*" this gets written as the path for EACH track in the library file that it then needs to load. Make it "Music/Artists/*" or something as short as you can. Then, what I did was to set iTunes to NOT copy music to the library folder, and starting with a blank iTunes dragged the icon of my shared music folder containing all the mp3 files into the iTunes window. It took overnight and the better part of the next day to read in all 130k files, copy the embedded coverart to the local folder, and the longest part, "determining gapless playback" --which i wish they'd give us a way to disable, takes forever! and i dont' use it (if anyone knows how to stop gapless scanning on a mac, let us know)... but fortunately you only have to do this once.
    3. More RAM. My main computer is an MDD Dual 1GHz G4. Until about a month ago I had a gig of RAM, then I finally got around to upgrading to 2 gigs. It DEFINITELY made a big difference. Where before I'd see the rainbow ball for 30 seconds whenever I made playlists or changes to tags, now I often don't see it at all, or certainly much less.
    I read not long ago an article talking about WHY iTunes was so bad at scaling up for larger libraries. The author suggested it was due to the library being saved out in an XML database and that unless Apple changes its backend to iTunes, it will not be possible to squeeze much more performance out of it. This probably makes it useless for DJs, radio stations, classical music lovers or anyone with large collections of music with many tracks. I assumed that at some point we'd see Apple rewrite iTunes to take advantage of the SQL database in the OS level, but who knows. I really don't think I understand database stuff at this level.
    Finally... I HAVE noticed that recently it has seemed to be getting more responsive. On one of my other machines (an iMac 800MHz G4!) running iTunes used to be a frustrating proposition with constant spinning beachballs and hangs before it let you pick music or change tracks. Now it's actually usable. I don't know when this got 'better' but I think with either 7.6, 7.6.1 or 7.6.2 they have done something to make it more responsive. At least in my experience. So that's a good development.
    But seriously.. do a search thru the forums and you'll find a lot more about this problem with large libraries.
    Good luck.

  • Powershell Get-CMDevice Failure on Large Collections

    We are getting an error with the PowerShell 'Get-CMDevice' cmdlet with larger collections. The line we are using is:
    Get-CMDevice -CollectionName "Test All Desktops and Laptops"
    This works on the smaller collections, but with this one we get the error:
    Get-CMDevice : ConfigMgr Error Object:
    instance of __ExtendedStatus
    Description = "[42000][191][Microsoft][SQL Server Native Client 11.0][SQL
    Server]Some part of your SQL statement is nested too deeply. Rewrite the query or
    break it up into smaller queries.";
    Operation = "ExecQuery";
    ParameterInfo = "SELECT * FROM SMS_CombinedDeviceResources WHERE
    (ResourceID=16777223 OR ResourceID=16777224 OR ResourceID=16777226 OR
    ResourceID=16777227 OR ResourceID=16777228 OR ResourceID=16777229 OR
    ResourceID=16777244 OR ResourceID=16777250 OR ResourceID=16777251 OR
    ResourceID=16777260 OR ResourceID=16777272 OR ResourceID=16777273 OR
    ResourceID=16777274 OR ResourceID=16777275 OR ResourceID=16777278 OR ..............ResourceID=16780606  OR ResourceID=2097154624  OR ResourceID=2097154626  OR 
    ResourceID=2097154645  OR ResourceID=2097154873 ) ";
        ProviderName = "WinMgmt";
        StatusCode = 2147749889;
    Error Code:
    Failed
    At line:1 char:1
    + Get-CMDevice -CollectionName "Test All Desktops and Laptops"
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (Microsoft.Confi...etDeviceCommand:GetDeviceCommand) [Get-CMDevice], WqlQueryException
        + FullyQualifiedErrorId : UnhandledExeception,Microsoft.ConfigurationManagement.Cmdlets.Collections.Commands.GetDeviceCommand
    The collection we are querying has 668 machines in it (all direct membership).  
    I have tried using 'Set-CMQueryResultMaximum' to set the maximum to say 2000 but this still gets the same error.
    Environment : SCCM 2012 SP1 CU3 
    Has anyone else seen this or found a fix etc?

    Hi, MattB101. This has been fixed in R2.
    Check out my Configuration Manager blog at http://aka.ms/ameltzer
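    If upgrading to R2 is not immediately possible, one workaround that might avoid the giant OR'd ResourceID query shown in the error is to enumerate the membership straight from the SMS provider's WMI namespace by CollectionID. Below is a rough sketch using Python's third-party "wmi" package; the site server, site code and CollectionID are placeholders, and the same SMS_FullCollectionMembership query could just as easily be run from PowerShell with Get-WmiObject.
    # Sketch of a workaround: query collection membership directly from the
    # SMS provider's WMI namespace instead of letting Get-CMDevice build one
    # huge OR'd ResourceID statement for all 668 members.
    # Assumptions: the "wmi" package (pip install wmi) on a Windows box with
    # rights to the provider; the server name, site code and CollectionID
    # below are placeholders to replace with your own values.
    import wmi

    SITE_SERVER = "cm01.example.com"   # hypothetical site server
    SITE_CODE = "ABC"                  # hypothetical site code
    COLLECTION_ID = "ABC00123"         # hypothetical collection ID

    conn = wmi.WMI(computer=SITE_SERVER, namespace="root\\SMS\\site_" + SITE_CODE)
    members = conn.query(
        "SELECT ResourceID, Name FROM SMS_FullCollectionMembership "
        "WHERE CollectionID = '" + COLLECTION_ID + "'"
    )
    for m in members:
        print(m.ResourceID, m.Name)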

  • Full Conductor's score with Logic.  Is it possible?

    I've heard it's possible to create full conductor scores with Logic. Is it truly possible?

    hi,
    as asher pointed out, making a full conductor's score is perfectly possible with logic, but there are issues, little problems, limitations and workarounds that have to be known about to make it a smooth experience.
    also, as asher again pointed out, there are many things that are faster in logic than in other dedicated notators, and in general composing the raw material is easier, faster and more flexible than in a dedicated notator. at least over on this side of the pond, working in logic and exporting to sibelius for the final result is a very standard way of operating. now, there is a PDF to XML application which some of us here (not me though - not tried it) have had great success with. i think it's william levine, who does a lot of orchestrating in logic, who recommends it.
    if, err...you can pardon the forwardness...i have online some sample scores that might demonstrate what is possible in logic.
    http://www.rohanstevenson.com/
    and go to concert music. 'black ice' or 'devil' are full orchestral scores.
    finally, if you do want to get your feet wet with scoring in logic, aside from j prischl's tutorial, my main bit of advice is to understand two concepts thoroughly - everything else will follow from that.
    1) score styles. you can have as many as you want on a track/instrument
    2) instrument sets. set a KC for their creation. understand what they are and how you can use them to create parts as well as help you edit. incredibly fast and flexible.

  • Improve score ??

    Hi,
    Back in Mar 2010 I appeared for the 1Z0-051-ENU Oracle Database 11g SQL Fundamental exam and scored 61% (60% is passing). The idea was to complete the OCA, however I wasn't regular in doing the rest of the papers.
    But now I am back on track and thinking of clearing the OCA.
    My first question is: is it possible to reattempt 1Z0-051-ENU Oracle Database 11g SQL Fundamental one more time to improve my score? I am not sure if this is going to be a good idea, or whether I should rather concentrate on clearing the 2nd exam required for the OCA, i.e. Oracle Database 10g: Administration I, or
    second option could be
    if re-attempting 1Z0-051-ENU Oracle Database 11g SQL Fundamental is not possible since I have already passed that exam, how about maybe taking 1Z0-047 Oracle Database SQL Expert?
    As far as my experience goes, I have close to 12 years in IT. My DB fundamentals from an admin perspective are decent, since I work primarily on an ERP system, and my intention is to move towards DB admin.
    Regards
    Learner

    bigdelboy wrote:
    learner1 wrote:
    Hi,
    Back in Mar 2010 I appeared for the 1Z0-051-ENU Oracle Database 11g SQL Fundamental exam and scored 61% (60% is passing). The idea was to complete the OCA, however I wasn't regular in doing the rest of the papers.
    But now I am back on track and thinking of clearing the OCA.
    My first question is: is it possible to reattempt 1Z0-051-ENU Oracle Database 11g SQL Fundamental one more time to improve my score? I am not sure if this is going to be a good idea, or whether I should rather concentrate on clearing the 2nd exam required for the OCA, i.e. Oracle Database 10g: Administration I, or
    You are not allowed to re-take an exam once passed.
    Scores are irrelevant. (Though knowledge and experience are not, and a low score is an indicator you may need to study the area further).
    Exam Passes are relevant.
    Only exam passes, not scores, count towards certification.
    Certifications are relevant.
    second option could be
    if re-attempting 1Z0-051-ENU Oracle Database 11g SQL Fundamental is not possible since I have already passed that exam, how about maybe taking 1Z0-047 Oracle Database SQL Expert?
    This is a better choice if you want to study SQL further through certification, but be aware that the SQL knowledge required is more advanced and the exam is harder; passing does, however, give a certification.
    It is your choice whether to proceed with the OCA you have in mind or the OCE.
    As far as my experience goes, I have close to 12 years in IT. My DB fundamentals from an admin perspective are decent, since I work primarily on an ERP system, and my intention is to move towards DB admin.
    Regards
    Learner
    Thanks bigdelboy for the clarifications. The reason I got confused was that if I go for an interview after getting the OCP certification and the recruiter asks what my score was in the various papers, it might be a little awkward to say. However, since retaking a passed exam is not an option, I guess I will work harder on the remaining exams.

  • Serial VISA 'Write' -why is it slow to return even with large buffer?

    Hi,
    I'm writing a serial data transfer code 'module' that will run 'in the background' on a cRIO-9014.  I'm a bit perplexed about how VISA write in particular seems to work.
    What I'm seeing is that the VISA Write takes about 177ms to 'return' from a 4096 byte write, even though my write buffer has been set to >> 4096.
    My expectation would be that the write completes near-instantly as long as the VISA driver's available buffer space is greater than the number of bytes waiting to be written, and that the write function would only 'slow down', up to the defined VISA timeout value, if there was no room in the buffer.
    As such, I thought it would be possible to 'pre-load' the transmit buffer at a high rate, then, by careful selection of the time-out value relative to the baud rate, it would self-throttle once the buffer fills up?
    Based on my testing this is not the case, which leaves me wondering:
    a) If you try to set the transmit buffer to an unsupported value, will you get an error?
    b) Assuming 'yes' to a, what the heck is the purpose of the serial write buffer? I see no difference running with serial buffer size == data chunk size and serial buffer size >> data chunk size??
    QFang
    CLD LabVIEW 7.1 to 2013

    Hi, I can quickly show the low-level part as a PNG. It's a sub-VI for transferring file segments. Some things, like the thin 'in-line' VI with (s) as the icon, were added to help me look at where the hold-up is. I cropped the image to make it more readable; the cut-off left and right sides are just the input and output clusters.
    In a nutshell, the VISA Write takes as much time to 'return' as it would take to transfer x bytes over y baud rate. In other words, even though there is supposed to be a (software or hardware) write and read buffer on the com port, the VISA Write function seems to block until the message has physically left the port (OR it writes TO the buffer at the same speed the buffer writes out of the port). This is very unexpected to me, and is what prompted me to ask what the point of the write buffer is in the first place. The observations are on the 9014 RT target's built-in serial port; I am not sure if the same is observed on other targets or other OSes. [edit: and the observation holds even if transmitting block sizes of, say, 4096 with a buffer size of 4096 or 2*4096 or 10*4096 etc. I also tried smaller block sizes and larger block sizes with larger still buffers. I was able to verify that the buffer re-size function does error out if I give it an insane input buffer size request, so I'm taking that to mean that when I assign e.g. a 4MiB buffer space with no error, the write buffer actually IS 4MiB, but I have not found a property to read back what the HW buffer is, so all I have to base that on is the lack of an error during buffer size setting. /edit]
    The rest of the code is somewhat irrelevant to this discussion; however, to better understand it, the idea is that the remote side of the connection will request various things, including a file. The remote side can request a file as a stream of messages each of size 'Block Size (bytes)', or it can request a particular block (for handling e.g. re-transmission if the file MD5 checksum does not match). The other main reason for doing block transfers is that VISA Write hogs a substantial amount of CPU, so if you were to attempt to write e.g. a 4MiB file out the serial port, assuming your VISA time-out is sufficiently long for that size of transfer, the write would succeed, but you would see ~50% CPU from this one thread alone and (depending on baud rates) it could remain at that level for a very long time. So, by transferring smaller segments at a time, I can arbitrarily insert delays between segments to let the CPU sleep (at the expense of longer transfer times). The first inner case shown, which opens the file, only runs for new transfers; the open file ref is kept on a shift register in the calling VI. The 'get file offset' function after the read was just something I was looking at during (continued) development, and is not required for the functionality that I'm describing.
    QFang
    CLD LabVIEW 7.1 to 2013
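    For what it's worth, the ~177 ms "return" time reported above is about what you would expect if VISA Write only returns once the last byte has physically left the port. A minimal arithmetic check, assuming 230400 baud and 10 bits on the wire per byte (start + 8 data + stop), neither of which is stated in the thread:
    # Rough check: how long should 4096 bytes take to clock out of a UART?
    # Assumptions (not stated in the thread): 230400 baud, 10 bits per byte on the wire.
    baud = 230400
    payload_bytes = 4096
    bits_on_wire = payload_bytes * 10
    transmit_time_s = bits_on_wire / baud
    print(round(transmit_time_s * 1000))  # ~178 ms, close to the ~177 ms observed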

  • Working with Large Blobs in BerkeleyDB

    Hi Everyone,
    I'm trying to use BerkeleyDB as a simple key/value store for large (~50 MB) blobs of data.
    Before I get too deep into this, I wanted to see if anyone had any advice as to the best way to configure Berkeley to do this efficiently. Some of the areas I could imagine being important are...
    1.) Whether or not to use a BTree index.
    2.) How cache and page sizes should be configured.
    3.) If the blobs are very large but divisible, are there any best practices around partitioning?
    Thanks,
    Caleb

    Hi,
    That's an interesting question. When Berkeley DB stores data items they are broken up into "page size" pieces and stored on a number of different pages. The best advice is to configure large page sizes when dealing with larger data items; that will result in us splitting the data into fewer pieces, thus reducing the cost of re-assembling the data to service read requests.
    Do you really need to store this actual data in a database file? Would it be practical for you to just keep metadata and reference information in the database, and store the actual blob data on disk? That configuration will likely deliver far superior performance - though you do lose the ACID guarantees on the blob data.
    Answers to your specific questions:
    1.) Whether or not to use a BTree index.
    I would use a btree database (index) for your situation.
    2.) How cache and page sizes should be configured.
    A cache that can hold the entire data set, and the largest possible page size (64k) would be my recommended configuration.
    3.) If the blobs are very large but divisible, are there any best practices around partitioning?
    I'd say that you are best to split the blobs into smaller data items in that scenario. The smaller the better :)
    Regards,
    Alex Gorrod
    Oracle Berkeley DB
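    To make the page-size and cache advice concrete, here is a minimal sketch using the Python bsddb3 bindings (the C API exposes the same set_cachesize/set_pagesize knobs); the environment path, the 1 GB cache figure and the file names are placeholder choices, not values recommended in the thread.
    # Minimal sketch of the configuration described above, via bsddb3.
    # Placeholders: environment path, cache size, database file name.
    from bsddb3 import db

    env = db.DBEnv()
    env.set_cachesize(1, 0)             # 1 GB cache: (gbytes, bytes)
    env.open("/path/to/dbenv", db.DB_CREATE | db.DB_INIT_MPOOL)

    blobs = db.DB(env)
    blobs.set_pagesize(65536)           # 64 KB pages, the maximum page size
    blobs.open("blobs.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    blobs.put(b"doc-0001", b"\x00" * (50 * 1024 * 1024))   # a ~50 MB value
    print(len(blobs.get(b"doc-0001")))

    blobs.close()
    env.close()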

  • Speed up Illustrator CC when working with large vector files

    Raster (mainly) files up to 350 MB run fast in Illustrator CC, while vector files of 10 MB are a pain in the *bleep* (e.g. zooming & panning). When reading the file it seems to freeze around 95% for a few minutes. Memory usage goes up to 6 GB; processor usage 30-50%.
    Are there ways to speed things up while working with large vector files in Illustrator CC?
    System:
    64 bit Windows 7 enterprise
    Memory: 16 GB
    Processor: Intel Xeon 3.7 GHz (8 threads)
    Graphics: NVIDIA GeForce K4000

    Files with large amounts of vector points will put a strain on the fastest of computers, but any speed increase we can get you can save you lots of time.
    Delete any unwanted stray points using  Select >> Object >> stray points
    Optimize performance | Windows
    Did you draw this yourself, is the file as clean as can be? Are there any repeated paths underneath your art which do not need to be there from live tracing or stock art sites?
    Check the control panel >> programs and features and sort by installed recently and uninstall anything suspicious.
    Sorry, there will be no short or single answer to this; as per the previous poster, using layers effectively and working in outline mode when possible might be the best you can do.

  • Is there a way to get vmmark score with more than 8 VMs/ tile?

    I want to run the VMmark tool not only with the 8 standard workload VMs but with a few more VMs added to that. Is it possible to get scores for those VMs as well? Can anybody please provide some help on how to do that?
    Is there any reference document available on customizing VMmark?

    What kind of "extra" VMs are you considering?  Remember that any runs where you change the workload mix will be non-compliant and cannot be used for public disclosure.

  • Problems with large scanned images

    I have been giving Aperture another try since 1.1 came out, and I am still having problems with large tiff files derived from scanned 4x5 negatives. The files are 500mb or more, 16 bit RGB, with ProPhoto RGB or Ektaspace PS5 profiles, directly out of the scanner.
    Aperture imports the files correctly, and shows their thumbnails. When I select a thumbnail, "Loading" is displayed briefly, and then the dreaded "Unsupported Image Format" is displayed. Sometimes "Loading" goes on for a while, and a geometric pattern (looking like a rendering of random memory) is displayed. Restarting Aperture doesn't help.
    Lower resolution (250mb, 16bit) files are handled properly. The scans are from an Epson 4870 scanner. I have tried pulling the scans into Photoshop and resaving with various tiff options, and as PSD with no improvement. I have the same problem with corrected/modified psd files coming out of Photoshop CS2.
    I am running on a Power Mac G5 dual 2ghz with 8gb of RAM and an NVIDIA GeForce 6800 GT DDL (250mb) video card, with all the latest OS and software updates.
    Has anyone else had similar problems? More importantly, is anyone else able to work with 500mb files of any kind? Is it my system, or is it the software? I sent feedback to Apple as well.
    dual g5 2ghz   Mac OS X (10.4.6)  

    I have a few (well actually about 100) scans on my system of >500Mb. I tried loading a few and am getting an inconsistent pattern of errors that correlates with what you are reporting.
    I imported 4 files and three were troubled; the fourth was OK. I imported another four files and the first one was OK and the three others had your reported error; also, the previously good file from the first import was now showing the same 'unsupported image' message.
    I would venture to say that if you shoot primarily 4x5 and work with scans of this size that Aperture is not the program for you--right now. I shoot 35mm and have a few images that I have scanned at 8000dpi on my Imacon 848 but most of my files are in the more reasonable 250Mb range (35mm @ 5000dpi).
    I will probably downsample my 8000dpi scans to 5000dpi and not worry too much about it. In a world where people believe that 16 megapixels is hi-res you are obviously on the extreme side. (Good for you!) You should definitely file a bug report, but I wouldn't expect much help anytime soon for your super-sized scans.
