PSD vs. Layered TIFF (on large volumes)

Platform is Windows XP, InDesign CS.
We currently have 750 GB volumes (Windows 2003 servers) where the InDesign documents and all graphic files reside.
We are moving data to a new server (NetApp storage) where the volume size is 3 TB. We have noticed that working with InDesign documents that have placed PSD files is slow: slow to place PSDs, slow to print to a printer or to file. However, if we save all the graphic elements as layered TIFF files, everything becomes quicker. Is there an issue with using LARGE volumes (or shares)? I ask because we don't have speed issues with smaller volume sizes.
We have also noted that when using InDesign CS3 with placed PSDs that reside on large volumes, times are better; however, upgrading to CS3 is not possible for us at this time because our desktop workstations are under-specced. Hardware upgrades are planned but will take many months to implement.

No, these are not large files; they range between 5 and 80 MB in size. Adobe informs me that InDesign CS3 handles PSD files better. But I'm still very curious as to why, when I have migrated data from a small volume to a very large volume, InDesign is affected by it whenever PSD files are involved.

Similar Messages

  • Bridge CS6 re-extracting layered tif previews

    The new Bridge is constantly starting over building thumbnails for a collection of layered TIFF files I have. I can select a different folder and come back, or if Bridge is in the background and I bring it forward, it starts extracting thumbnails for all the files again.

    It definitely finishes the extraction before I try anything else. I can set it to a folder, switch to another program and see that it finishes; then I click on Bridge and it starts extracting again, every single time I switch away and back. If it is in the middle of extracting a folder and I switch to Bridge, it starts all over again. The thumbnails themselves stay the high-quality extracted versions and I can instantly full-screen preview any file, but at the bottom of the window it says that it is extracting the thumbnails, and when it finishes it does that color-shift thing for all the thumbnails that it normally does when extraction completes.
    I was hoping they would fix the performance for layered TIFF files as well, since it is glacial compared to extracting PSDs.

  • Photoshop CS3 shuts down after opening a PSD file with layers

    Hello, I have run into a problem with Photoshop CS3 (Windows XP SP2, no network printer): after opening a PSD file with layers (saved with CS2), Photoshop shuts down. No error message.
    Can any of you help me? (The only software I have installed after Creative Suite CS3 is Corel X4. I have installed no updates for CS3.)

    > I have installed no upgrades of CS3
    You should!
    Even if it doesn't help here you should.

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
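    For illustration, a minimal sketch of that pivot+validate pattern might look like the following (not our actual code; short_fat_1, intermediate_long_thin and validate_value are hypothetical stand-ins, since the real statement is assembled from the runtime meta-data one variable/column at a time):
    DECLARE
      TYPE t_num IS TABLE OF NUMBER;
      TYPE t_val IS TABLE OF VARCHAR2(10);
      l_case   t_num;
      l_value  t_val;
      l_status t_val;
      l_cur    SYS_REFCURSOR;
      l_var_id NUMBER := 123;  -- variable definition resolved from the meta-data at runtime
    BEGIN
      OPEN l_cur FOR
        'SELECT case_num_id, col001, validate_value(:v, col001) FROM short_fat_1'
        USING l_var_id;        -- validation is done inside the dynamic SQL itself
      LOOP
        FETCH l_cur BULK COLLECT INTO l_case, l_value, l_status LIMIT 10000;
        EXIT WHEN l_case.COUNT = 0;
        FORALL i IN 1 .. l_case.COUNT   -- array insert instead of row-by-row
          INSERT INTO intermediate_long_thin (case_num_id, variable_id, variable_value, status)
          VALUES (l_case(i), l_var_id, l_value(i), l_status(i));
      END LOOP;
      CLOSE l_cur;
    END;
    /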
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc., and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have 100 any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID  VARCHAR2(13)
    COL001   VARCHAR2(10)
    ...
    COL250   VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
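    Expressed as DDL, a sketch of the two long-thin tables would be roughly this (names illustrative only):
    CREATE TABLE intermediate_long_thin (
      case_num_id    NUMBER(10),    -- surrogate key of the parent case
      variable_id    NUMBER(10),    -- PK of the variable definition
      variable_value VARCHAR2(10),
      status         VARCHAR2(10)   -- validation result from the pivot step
    );
    CREATE TABLE target_long_thin (
      case_num_id    NUMBER(10),
      variable_id    NUMBER(10),
      variable_value VARCHAR2(10)
    );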
    We only ever load valid data into the target table.
    Chris

  • Processing large volume of idocs using BPM Processing

    Hi,
    I have a scenario in which SAP R/3 sends a large volume, say 30,000 DEBMAS IDocs, to XI. XI then sends the data to 3 legacy systems using the JDBC adapter.
    I created a BPM process which waits for 4 hrs to collect all the IDocs. This is what my BPM does:
    1. Wait for 4 hrs and collect the IDocs.
    2. For every IDoc, do an IDOC->JDBC message transformation.
    3. Append to a big list.
    4. Loop over the big list from step 3, and within the loop:
    5. Start a counter from 0 and increment it, appending to a small list.
    6. If the counter reaches 100, send a batch JDBC message in a send step.
    7. Reset the counter after every send.
    8. Process the remaining list, i.e. if the total is not an exact multiple of 100 (say 53 IDocs are left over), the remainder is sent in another block.
    After sending 5,000 IDocs to the above BPM, the following problems occur:
    1. I cannot read the workflow log, as the system does not respond.
    2. In the For-Each loop that iterates over the big list of, say, 5,000 IDocs, only the first pass of 100 was processed; after that the workflow item does not move ahead. It remains in the status "STARTED" but I do not see further processing.
    Please tell me why certain work items are stuck. Is it because I have reached an upper limit, and is this the right approach? The main BPM process has also been hanging for the last 2 days.
    I have concerns about using BPM for processing such a high volume of IDocs in production. Please advise; thanks in advance.
    Regards
    Ashish

    Hi Ashish,
    Please read SAP's checklist for proper usage of BPMs: http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    One point I'm wondering about is why you send the IDocs out of R/3 one by one and don't use packaging there. From a performance standpoint this is much better than a BPM.
    The SAP Checklist states the following:
    <i>"No Replacement for Mass Interfaces
    Check whether it would not be better to execute particular processing steps, for example, collecting messages, on the sender or receiver system.
    If you only want to collect the messages from one business system to forward them together to a second business system, you should do so by using a mass interface and not an integration process.
    If you want to split a message up into lots of individual messages, also use a mass interface instead of an integration process. A mass interface requires only a fraction of the back-end system and Integration-Server resources that an integration process would require to carry out the same task. "</i>
    Also, you might want to have a look at the IDoc packaging capabilities within XI (available since SP14, I believe): http://help.sap.com/saphelp_nw04/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm
    And here is Sravya's good blog about this topic: /people/sravya.talanki2/blog/2005/12/09/xiidoc-message-packages
    If for whatever reason you can't or don't want to use the IDoc packets from R/3 or XI there are other points on which you can focus for optimizing your process:
    In the section "Using the Integration Server Efficiently" there is an overview on which steps are costly and which steps are not so costly in their resource consumption. Mappings are one of the steps that tend to consume a lot of resources and unless it is a multi mapping that can not be executed outside a BPM there is always the option to do the mapping in the interface determination either before or after the BPM. So i would sugges if your step 2 is not a multi mapping you should try to execute it before entering the BPM and just handle the JDBC Messages in the BPM.
    Wait steps are also costly steps, so reducing the time in your wait step could potentially lead to better performance. Or if possible you could omitt the wait step and just create a process that waits for 100 messages and then processes them.
    Regards
    Christine

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volumn of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there will be no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that do or do not conform for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
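    As a rough sketch of that check (hypothetical table and column names, and assuming SQL Server 2012 or later for TRY_CONVERT):
    -- Count how many values in a varchar column would and would not convert to the proposed type.
    SELECT COUNT(*)                                AS total_rows,
           COUNT(TRY_CONVERT(date, some_date_col)) AS convertible,
           SUM(CASE WHEN some_date_col IS NOT NULL
                     AND TRY_CONVERT(date, some_date_col) IS NULL
                    THEN 1 ELSE 0 END)             AS non_convertible
    FROM   dbo.big_table;
    -- Once the safe columns are known, copy into a strongly typed table:
    SELECT TRY_CONVERT(date, some_date_col) AS some_date_col
           -- , ...remaining columns, converted or left as varchar...
    INTO   dbo.big_table_typed
    FROM   dbo.big_table;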
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up with FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation, in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, like (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again before it functions properly.
    Please suggest a way to handle large volumes (file sizes up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file were processed in a target system, maybe you could split your source file into several messages by setting the "Recordsets per Message" parameter in the sender file adapter.
    However, you just want to convert your .txt file into an .xml file. So, first try setting the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    However, this could solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine (I mean the JCo call error...).
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and then sent to the pipeline in the Integration Engine.
    Carlos

  • Store large volume of Image files, what is better ?  File System or Oracle

    I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
    I am not sure whether I have to use the file system to store the images or the database (BLOB or CLOB).
    Until now I have only used the file system.
    Could someone who already has experience with storing large volumes of images tell me the advantages and disadvantages of using the file system versus using the Oracle database?
    My initial database will have 8,000,000 images and it will grow by 3,000,000 per year.
    Each image will have a size between 200 KB and 8 MB, but the mean is 300 KB.
    I am using Oracle 10g. I read in other forums about PostgreSQL and Firebird that it isn't good to store images in the database because the database always crashes.
    I need to know if it is the same with Oracle, and why. Can I trust Oracle for such a large service? Are there any tips for storing files in the database?
    Thanks for the help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in a database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file but the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can leave things inconsistent (images with nothing in the database, and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
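    As a rough illustration of point 4 (hypothetical table name; 11g SecureFile syntax, and note that COMPRESS/DEDUPLICATE require the Advanced Compression option):
    CREATE TABLE images (
      image_id   NUMBER        PRIMARY KEY,
      file_name  VARCHAR2(255) NOT NULL,
      image_data BLOB
    )
    LOB (image_data) STORE AS SECUREFILE (
      COMPRESS MEDIUM   -- transparently compress the image LOBs
      DEDUPLICATE       -- store identical images only once
    );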
    Justin

  • Cannot Print PDF Large volume PDF.  Internal Error: 8004, 6343724, 8484240, 0

    We are having an issue with printing to PDF for large-volume FM files. PDF appears to work for small/medium-volume files, however.
    Internal Error: 8004, 6343724, 8484240, 0
    FrameMaker 8.0.0 for Intel
    Build: 8.0p273
    I've checked the default printer and it is set to Adobe PDF. I've also tried printing to PDF from a different FM file in case the current file might be corrupted, and the same error appears. At this point the issue is affecting our work, as we cannot save to PDF. Does anyone have any ideas that I can try?
    Thanks
    Steve

    Shelia,
    Thank you for your reply.  Here are some further details on the question that you asked:
    1. At the beginning, because the PDF has 226 pages in total and consists of 19 FM files (which is what I meant by large volume), I thought that was why this problem occurs. (FYI, small/medium volume means a PDF with 112 pages in total consisting of 16 FM files, or only around 20 pages in total consisting of only 1 FM file.)
    However, I now know the problem occurs in two specific, very small FM files. These are only 3 pages / 15 pages in total and each consists of only one FM file. So I do not think the volume is related to this problem.
    2. I have always used "Save As PDF" to create the PDF files so far. I am using the PDF distiller that came with FrameMaker 8.
    3. All files, including graphics, are on my local drive (C drive).
    So I'm wondering if the problem is possibly due to the two specific FM files. If so, what are your recommendations for getting around this?
    Thanks
    Steve

  • Split of a Large Volume Outbound Idoc

    Hi,
    can anyone tell me how splitting of a large-volume outbound IDoc works?
    /Elvez

    Hi Elvez,
    One way to split your idoc is to group your idoc into different segments
    You can create segments and group your data logically.
    Go through the following link. It will give you good tips on IDOCs
         http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
      other helpful links are...
         ALE/ IDOC/ XML
         http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
         http://www.thespot4sap.com/Articles/SAP_XML_Business_Integration.asp
         http://help.sap.com/saphelp_srm30/helpdata/en/72/0fe1385bed2815e10000000a114084/content.htm
    Good luck, and reward points if this helps.
    Thanks
    Ashok

  • Create a GPT partition table and format with a large volume (solved)

    Hello,
    I'm having trouble creating a GPT partition table for a large volume (~6 TB). It is a RAID 5 (hardware) made of 3 hard disk drives of 3 TB each (hence the resulting 6 TB volume).
    I tried creating a GPT partition table with gdisk, but it just fails, stopping here (I've let it run for something like 3 hours...):
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/md126.
    I also tried with parted but I get the same result. Having no luck there, I created a GPT partition table from Windows 7 with 2 NTFS partitions (15 GB and the rest of the space for the other) and it worked just fine. I then tried to format the 15 GB partition as ext4 but, as with gdisk, mkfs.ext4 just never finishes.
    Some information:
    fdisk -l
    Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd9a6c0f5
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 104861695 52429824 83 Linux
    /dev/sda2 104861696 466567167 180852736 83 Linux
    /dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x5ffb31fc
    Device Boot Start End Blocks Id System
    /dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
    Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 65536 bytes / 131072 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/md126p1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    gdisk -l on my RAID volume (/dev/md126):
    GPT fdisk (gdisk) version 0.8.7
    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present
    Found valid GPT with protective MBR; using GPT.
    Disk /dev/md126: 11720982528 sectors, 5.5 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 11720982494
    Partitions will be aligned on 8-sector boundaries
    Total free space is 6077 sectors (3.0 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 34 262177 128.0 MiB 0C01 Microsoft reserved part
    2 264192 33032191 15.6 GiB 0700 Basic data partition
    3 33032192 11720978431 5.4 TiB 0700 Basic data partition
    To make things clear: sda is an SSD on which Arch Linux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), and sde is a hard disk drive with Windows 7 installed on it. My goal with the 15 GB partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
    So if anyone has any suggestion that would help me out with this, I'd be glad to read.
    Cheers
    Last edited by Rolinh (2013-08-16 11:16:21)

    Well, I finally decided to use software RAID, as I will not share this partition with Windows anyway and it seems a better choice than the fake RAID.
    Therefore, I used the mdadm utility to create my RAID 5 and then formatted it:
    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
    (Here stride is the RAID chunk size divided by the filesystem block size, and stripe-width is stride multiplied by the number of data disks, which is 2 for a 3-disk RAID 5.)
    It works like a charm.

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'extracts'. But the requirement is such that business users want to build their own 'ad hoc extracts'. The only way I can think of to achieve this is to build a universe, create the query, save the report and not run it, then write a RAS SDK program to retrieve the SQL code from the reports, save it into a .txt file and process it directly in Teradata.
    Is there any predefined solution available with SAP BO or any other tool for this kind of scenario?

    Hi Shawn,
    Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a pointer to where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • How can I open a psd file with layers in touch?

    I want to edit a psd file with layers in photoshop touch. Is this possible?

    No. (Kind of, sort of.) You can export PS Touch projects for import into desktop Photoshop and have the layers intact for further editing but you cannot do the reverse; any imported PSD will be automatically flattened. (I'm guessing this was done to prevent crashing. PS Touch supports 12 layers max vs. the potential thousands available in desktop Photoshop.)

  • Managing large volumes of images

    I have a large volume of images, mostly in raw format. I almost lost them all a few years ago when something happened to iPhoto '06. Since that time I have avoided iPhoto and have been managing the file structure myself and using Lightroom.
    All my images are now stored on a NAS running Raid 0. I am feeling a little more secure now, so....
    ...I am interested to know what database improvements have been made to iPhoto. Is it safe to use with that much data? Does it work well with Lightroom? How does it work with Aperture, or does Aperture just replace iPhoto? Can the iPhoto or Aperture database reside on my NAS?
    Cheers.

    1. The protection against any database failure is a good current backup. iPhoto makes an automatic backup of the database file, which facilitates recovery from issues. However, this is not a substitute for a backup of the entire Library.
    2. The number of images is what's important to iPhoto, not the total file size. iPhoto is good for 250k images and we've seen reports here from folks with more than 100,000. So it will work with that volume.
    3. It doesn't work with Lightroom at all. The same is true of Aperture.
    iPhoto, Lightroom and Aperture are all essentially database applications. All of them want to manage the data and will only share it under certain protocols.
    Aperture and Lightroom don't actually edit photos. When you process a photo in these apps, the file is not changed. Instead, the decisions are recorded in the database and applied live when you view the pic. With Lightroom the only way to get an edited image into iPhoto is to export from LR and then import into iPhoto. (There was a plug-in to automate that process, but I have no idea if it's been updated since LR 1.)
    Aperture can share its Previews with iPhoto, but that's all. Otherwise you need to do the export/import dance.
    What communication there is between Aperture and iPhoto is designed to facilitate upgrading from iPhoto to Aperture. Yes, Aperture is a complete replacement for iPhoto.
    Neither the iPhoto nor the Aperture Library can live on your NAS. However, the file management tools in Aperture are such that you can easily store the files on your NAS while the Library is on the HD. You can also do this with iPhoto, but I wouldn't recommend it.
    Frankly, if you're a raw shooter I don't understand why you would consider changing from the pro-level LR to a home user's iPhoto.
    Regards
    TD

  • Can't highlight and delete or move large volumes of emails. Have to hold command to select multiple and click on each one ?

    I love being able to mass delete, mark as read, or move emails, but with Mountain Lion I now have to select each email individually and cannot just hold the mouse button and drag to select in large volumes. I can select more than one only by holding down Command and clicking on each email. Is there a way around this? What has changed?
    This is terrible and makes deleting emails completely time-consuming.

    Found the solution. You must double-click first and then scroll!
