Regarding compression of cubes.

Hi All,
I have been given a support project.
They already have cubes, but the cubes were never compressed.
Now I want to compress them. How do I compress the cubes?
I need step-by-step details.
Thanks
Vasu.

HI,
In your cube > right-click > Manage > go to the Collapse tab.
On the Collapse tab, click the Request ID button. Here you can specify which requests you want to compress. Then click the Release button to start the compression.
Also check this link:
http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
Hope this helps,
suresh

Similar Messages

  • Regarding compression in ACE

    Hi All,
    I need to configure the Compression policy in ACE.
Please can anyone explain how to configure the compression policy and how to troubleshoot it?
    Thanks in advance.
    Regards,
    Lingaraj

    Hi Lingaraj,
    You can configure compression by entering the compress command under a Layer 7 load-balancing policy. For example:
class-map type http loadbalance match-any L7default-compression-exclusion-mime-type_CLASS
    description Classmap for default SLB compression exclusion mime-types.
    2 match http url .*gif
    3 match http url .*css
    4 match http url .*js
    policy-map type loadbalance first-match L7_COMP_SLB_POLICY
    class L7default-compression-exclusion-mime-type_CLASS
       serverfarm SFARM1
    class class-default
       serverfarm SFARM1
       compress default-method deflate 
You can also create an HTTP parameter map and apply that parameter map to the policy map:
    parameter-map type http  compression
        compress minimum-size xxx
    Regards,
    Kanwal

  • FCP 7 regarding compression

I'm relatively new to FCP 7...
I have a question about this part: if you look at the bottom-right section, there was a file that I had to compress in order to get the video up on Vimeo.
But when it was done compressing, it came out "failed." I was still able to upload it, but I was wondering why it came out as "failed."
Any help?

Try opening the disclosure triangle in the History window next to "publish to apple tv." It might give you some info.
Also, you might have more info in the Batch Monitor app, which is in Applications > Utilities.

  • Question regarding COMPRESSION

    Hi Experts,
I have created a normal table, without compression, as shown below:
    SQL> CREATE TABLE t pctfree 50 as select * from all_objects;
    Table created.
    SQL> insert into t select * from t;
    41261 rows created.
    SQL> insert into t select * from t;
    82522 rows created.
    SQL> commit;
    Commit complete.
SQL> select pct_free, compression from user_tables where table_name = 'T';
      PCT_FREE COMPRESS
            50 DISABLED
    SQL> column segment_name format a20
    SQL> select segment_name, bytes/1024, blocks from user_segments where segment_name in ('T');
    SEGMENT_NAME         BYTES/1024     BLOCKS
T                         33792       4224
Now, I will compress this table:
    SQL> alter table t move compress;
    Table altered.
    SQL> select pct_free, compression from user_tables where table_name = 'T';
      PCT_FREE COMPRESS
             0 ENABLED
    SQL> select segment_name, bytes/1024, blocks from user_segments where segment_name in ('T');
    SEGMENT_NAME         BYTES/1024     BLOCKS
T                          6144        768
Oracle has indeed compressed the data. But, for some reason, data has been inserted into this table without the APPEND hint, and the result is that the table now contains both compressed and uncompressed data.
    SQL> insert into t select * from t;
    165044 rows created.
    SQL> select pct_free, compression from user_tables where table_name = 'T';
      PCT_FREE COMPRESS
             0 ENABLED
    SQL> select segment_name, bytes/1024, blocks from user_segments where segment_name in ('T');
    SEGMENT_NAME         BYTES/1024     BLOCKS
    T                         22528       2816
SQL>
Is there a way to identify that there exists uncompressed data, without rebuilding the table?
    Regards

    Hi Chris,
    Thanks for answering and giving a new direction.
    But unfortunately, when I dump a block, I am unable to locate "ntab" entry in the dump file.
Here's how I did it:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
    SQL> create table t compress as select * from all_objects;
    Table created.
    SQL> create table t2 as select * from all_objects;
    Table created.
    SQL> select segment_name, bytes, blocks from user_segments where segment_name in ('T', 'T2');
    SEGMENT_NAME              BYTES     BLOCKS
    T                       2097152        256
    T2                      5242880        640
    SQL> select file_id, block_id, blocks from dba_extents where owner = 'TEST' and segment_name = 'T';
       FILE_ID   BLOCK_ID     BLOCKS
             6         33          8
             6         65          8
             6         73          8
             6         81          8
             6         89          8
             6         97          8
             6        105          8
             6        113          8
             6        137        128
             5         89          8
             5         97          8
             5        113          8
             5        129          8
             5        393          8
             5        401          8
             5        433          8
             5        473          8
    17 rows selected.
    SQL> alter system dump datafile 6 block 33;
    System altered.
    SQL> select file_id, block_id, blocks from dba_extents where owner = 'TEST' and segment_name = 'T2';
       FILE_ID   BLOCK_ID     BLOCKS
             6        121          8
             6        129          8
             6       5385          8
             6       5393          8
             6       5401          8
             6       5409          8
             6       5417          8
             6       5425          8
             6       5433          8
             6       5473          8
             6       5481          8
             6       5489          8
             6        265        128
             6        393        128
             5        489          8
             5        497          8
             5        505          8
             5        513          8
             5        137        128
             5        265        128
    20 rows selected.
    SQL> alter system dump datafile 6 block 121;
System altered.
And the following are the dump files:
    Dumpfile for the compressed table T:
    Dump file c:\db10g\udump\db10g_ora_4140.trc
    Tue Jul 08 11:30:33 2008
    ORACLE V10.2.0.4.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows XP Version V5.1 Service Pack 2
    CPU : 2 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:149M/1013M, Ph+PgF:483M/2347M, VA:1101M/2047M
    Instance name: db10g
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 4140, image: ORACLE.EXE (SHAD)
    *** 2008-07-08 11:30:33.161
    *** ACTION NAME:() 2008-07-08 11:30:33.129
    *** MODULE NAME:(sqlplus.exe) 2008-07-08 11:30:33.129
    *** SERVICE NAME:(SYS$USERS) 2008-07-08 11:30:33.129
    *** SESSION ID:(37.18585) 2008-07-08 11:30:33.129
    Start dump data blocks tsn: 104 file#: 6 minblk 33 maxblk 33
    buffer tsn: 104 rdba: 0x01800021 (6/33)
    scn: 0x0000.01e88176 seq: 0x03 flg: 0x00 tail: 0x81762003
    frmt: 0x02 chkval: 0x0000 type: 0x20=FIRST LEVEL BITMAP BLOCK
    Hex dump of block: st=0, typ_found=1
    Dump of memory from 0x098A3E00 to 0x098A5E00
    98A3E00 0000A220 01800021 01E88176 00030000 [ ...!...v.......]
    98A3E10 00000000 00000000 00000000 00000000 [................]
    Repeat 2 times
    98A3E40 00000000 00000000 00000000 00000004 [................]
    98A3E50 FFFFFFFF 00000000 00000003 00000010 [................]
    98A3E60 00010002 00000000 00000000 00000000 [................]
    98A3E70 00000000 00000010 00000000 00000000 [................]
    98A3E80 00000000 00000000 00000000 00000000 [................]
    98A3E90 01800022 00000000 00000000 00000000 ["...............]
    98A3EA0 00000000 00000000 00000000 00000000 [................]
    Repeat 1 times
    98A3EC0 0001073A 00000000 00000000 01800021 [:...........!...]
    98A3ED0 00000008 00000000 01400059 00000008 [........Y.@.....]
    98A3EE0 00000008 00000000 00000000 00000000 [................]
    98A3EF0 00000000 00000000 00000000 00000000 [................]
    Repeat 8 times
    98A3F80 00000000 00000000 00000000 11111111 [................]
    98A3F90 11111111 00000000 00000000 00000000 [................]
    98A3FA0 00000000 00000000 00000000 00000000 [................]
    Repeat 484 times
    98A5DF0 00000000 00000000 00000000 81762003 [............. v.]
    Dump of First Level Bitmap Block
    nbits : 4 nranges: 2 parent dba: 0x01800022 poffset: 0
    unformatted: 0 total: 16 first useful block: 3
    owning instance : 1
    instance ownership changed at
    Last successful Search
    Freeness Status: nf1 0 nf2 0 nf3 0 nf4 0
    Extent Map Block Offset: 4294967295
    First free datablock : 16
    Bitmap block lock opcode 0
    Locker xid: : 0x0000.000.00000000
    Inc #: 0 Objd: 67386
    DBA Ranges :
    0x01800021 Length: 8 Offset: 0
    0x01400059 Length: 8 Offset: 8
    0:Metadata 1:Metadata 2:Metadata 3:FULL
    4:FULL 5:FULL 6:FULL 7:FULL
    8:FULL 9:FULL 10:FULL 11:FULL
    12:FULL 13:FULL 14:FULL 15:FULL
    End dump data blocks tsn: 104 file#: 6 minblk 33 maxblk 33
    Dumpfile for the UNcompressed table T2:
    *** 2008-07-08 11:31:38.038
    Start dump data blocks tsn: 104 file#: 6 minblk 121 maxblk 121
    buffer tsn: 104 rdba: 0x01800079 (6/121)
    scn: 0x0000.01e881db seq: 0x03 flg: 0x04 tail: 0x81db2003
    frmt: 0x02 chkval: 0x81ad type: 0x20=FIRST LEVEL BITMAP BLOCK
    Hex dump of block: st=0, typ_found=1
    Dump of memory from 0x098A3E00 to 0x098A5E00
    98A3E00 0000A220 01800079 01E881DB 04030000  [ ...y...........]
    98A3E10 000081AD 00000000 00000000 00000000  [................]
    98A3E20 00000000 00000000 00000000 00000000  [................]
            Repeat 1 times
    98A3E40 00000000 00000000 00000000 00000004  [................]
    98A3E50 FFFFFFFF 00000000 00000003 00000010  [................]
    98A3E60 00010002 00000000 00000000 00000000  [................]
    98A3E70 00000000 00000010 00000000 00000000  [................]
    98A3E80 00000000 00000000 00000000 00000000  [................]
    98A3E90 0180007A 00000000 00000000 00000000  [z...............]
    98A3EA0 00000000 00000000 00000000 00000000  [................]
            Repeat 1 times
    98A3EC0 0001073B 00000000 00000000 01800079  [;...........y...]
    98A3ED0 00000008 00000000 014001E9 00000008  [..........@.....]
    98A3EE0 00000008 00000000 00000000 00000000  [................]
    98A3EF0 00000000 00000000 00000000 00000000  [................]
            Repeat 8 times
    98A3F80 00000000 00000000 00000000 11111111  [................]
    98A3F90 11111111 00000000 00000000 00000000  [................]
    98A3FA0 00000000 00000000 00000000 00000000  [................]
            Repeat 484 times
    98A5DF0 00000000 00000000 00000000 81DB2003  [............. ..]
    Dump of First Level Bitmap Block
       nbits : 4 nranges: 2         parent dba:  0x0180007a   poffset: 0    
       unformatted: 0       total: 16        first useful block: 3     
       owning instance : 1
       instance ownership changed at
       Last successful Search
       Freeness Status:  nf1 0      nf2 0      nf3 0      nf4 0     
       Extent Map Block Offset: 4294967295
       First free datablock : 16    
       Bitmap block lock opcode 0
       Locker xid:     :  0x0000.000.00000000
       Inc #: 0 Objd: 67387
      DBA Ranges :
       0x01800079  Length: 8      Offset: 0     
       0x014001e9  Length: 8      Offset: 8     
       0:Metadata   1:Metadata   2:Metadata   3:FULL
       4:FULL   5:FULL   6:FULL   7:FULL
       8:FULL   9:FULL   10:FULL   11:FULL
       12:FULL   13:FULL   14:FULL   15:FULL
    End dump data blocks tsn: 104 file#: 6 minblk 121 maxblk 121
    Database:  Oracle 10g Release 2 (10.2.0.4)
OS:           Windows XP
Your guidance is appreciated.
    Thanks
Regards
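On Oracle 11g Release 2 and later (not the 10.2.0.4 release used above), the DBMS_COMPRESSION package can classify individual rows by rowid, so mixed compressed/uncompressed storage can be detected without rebuilding the table. A sketch, assuming the table T from the examples above:
select comp_type, count(*) as row_count
from (
  -- 1 = DBMS_COMPRESSION.COMP_NOCOMPRESS (row stored uncompressed);
  -- any other value means the row is stored in some compressed format.
  select dbms_compression.get_compression_type(user, 'T', rowid) as comp_type
  from t sample (10)  -- examine a 10% sample rather than every row
)
group by comp_type;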

  • Doubt in compression of tables

    Hello,
    I have a doubt regarding compression of tables.
I compressed three tables in my database and took an export dump of the compressed tables. Later I took a dump of the same tables uncompressed (holding the same data as the compressed tables). But there was no difference in the file size of the two dumps. Both dumps are the same size; in fact, the dump of the compressed tables is slightly larger, by a few MB, than the dump of the uncompressed tables.
My doubt is why the sizes of the dumps do not vary. Does compressing not change the physical size of the data?

Table compression is supported (and useful) in the following cases:
1) direct-path SQL*Loader
2) CREATE TABLE ... AS SELECT
3) parallel inserts, or inserts with the APPEND hint
4) single-row or array inserts and updates (with the 11g Advanced Compression option)
If you want to compress dump data, use the compression feature of Data Pump (11g).
Note: Advanced Compression is an Enterprise Edition option and needs an extra licence.
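For example, a quick sketch of the difference (table and source names are illustrative only):
-- Case 2: rows loaded via CREATE TABLE ... AS SELECT are compressed.
create table t_comp compress as select * from all_objects;
-- Case 3: a direct-path (APPEND) insert also stores its rows compressed.
insert /*+ append */ into t_comp select * from all_objects;
commit;
-- A conventional insert, by contrast, stores its rows uncompressed
-- (with basic compression; OLTP compression changes this in 11g).
insert into t_comp select * from all_objects;
commit;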
    Werner

Urgent: Compression job taking a long time?

Does anybody know how much time compression of a cube should take for around 15,000 records?
For us, it is taking around 3-4 hours.
How can we finish it sooner?
We have around 1,900 requests in the cube, and each request has around 10,000 records.
So if we go on like this, it will be very time-consuming, will degrade the performance of other loads, and is very tedious.
Please give your suggestions.
Thanks in advance.

Hi Sonika,
Please find my answer against each of your questions:
1. Check the availability of the background processes in SM50.
Answer: only one job is running.
2. Check ST04 > Detail Analysis menu > Oracle session; check whether any memory is locked there.
Answer: no locked memory.
3. Check in SM12 whether your cube is locked.
Answer: not locked.
4. Check in DB12 whether any backup is running (if you are authorized).
Answer: no backup is running.
5. Check the tablespaces in DB02.
Answer: which tablespace do you mean, i.e., which table name?

  • Rollup and Compression in PC

Guys, let's say I have an InfoCube which has aggregates created on it. In a process chain, how would the data flow look?
Is this correct?
    START > DROP INDEXES > LOAD > CREATE INDEX > Roll Up of Filled Aggregates/BIA Indexes
Also, what about compression?
    START > DROP INDEXES > LOAD > CREATE INDEX > Compression of the InfoCube
    Thanks,
    RG

    Hi,
Yes, this is the process.
But compression comes after the roll-up of aggregates (if aggregates exist).
Hope this helps.

  • When to compress Info Cube Data?

    Dear Gurus,
I am Mohan. I have a doubt regarding compression of InfoCube data. In what situation would we need to compress the InfoCube data? Are BW statistics used in making this decision? Should it be decided by analysing query runtimes in transaction ST03? Please give me detailed information.
    Best Regards
    Mohan Kumar

    Hi,
For performance reasons, and to save memory space, compress a request as soon as you have established that it is correct and will no longer need to be removed from the InfoCube.
Note: the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query.
Using compression, you can eliminate these disadvantages and bring the data from different requests together into one single request (request ID 0).
    -Shreya

  • Audio and video compression preferences for export to DVDSP

    Hi,
I have a few questions here. The first is regarding the internal use of Compressor by DVD Studio Pro. It is my understanding from (limited experience with) past projects that QuickTime movie files are automatically compressed to MPEG-2 format when imported into DVDSP. To what extent do I have control over the level of compression, and are the default MPEG-2 files of optimal quality? Where can I check this?
My second question is regarding compression of audio files. Whereas my video files were automatically compressed, I am not so sure about the audio. I was forced to compress the audio down to Dolby 2.0 separately, using the external Compressor program, in order not to exceed the bit rate on my last project. Was this just a question of one or the other (video/audio) needing further compression than the default amount, or is there more to this issue?
I live and work with video in Europe, where I have recently been told that the optimal audio is MP3 (or MP4?) rather than Dolby 5.1. Is this correct, and how much more space does Dolby 5.1 take than Dolby 2.0? How significant an issue is this when making DVDs for musicians?
Lastly, can someone recommend the best-quality audio/video compression solution for a DVD containing approx. 40 minutes of audio and video footage for use in Europe? Or explain how I might go about finding this out for myself?
    Thanks a lot, and I look forward to reading your comments.
    4 x 2.5 GHz PowerPC G5   Mac OS X (10.4.6)   8 GB DDR2 SDRam

It is my understanding from (limited experience with) past projects that QuickTime movie files are automatically compressed to MPEG-2 format when imported into DVDSP. To what extent do I have control over the level of compression, and are the default MPEG-2 files of optimal quality? Where can I check this?
Settings for DVDSP's internal encoder can be adjusted in DVDSP preferences. Press Command-comma (the Command and comma keys) to bring those up, then select the Encoder tab. In your case, make sure you adjust the settings for SD DVD (and not HD DVD). At the bottom of that tab, you will also be able to specify whether to encode in the background (basically, as soon as you import your file) or encode on build.
I was forced to compress the audio down to Dolby 2.0 separately, using the external Compressor program, in order not to exceed the bit rate on my last project. Was this just a question of one or the other (video/audio) needing further compression than the default amount, or is there more to this issue?
I can't claim to have read any of your previous threads, but if you exceeded the bit rate, that is a common reason to need to use Dolby compression for your audio (unless you can afford to recompress your video files).
That is, if your video files use a high bit rate (say, over 6 Mbps average), then it's usually necessary to use Dolby compression to make sure that all your footage fits on a single disc. (This is a simplification, of course, but I think you get the idea.)
Or was there more to this question that I'm missing?
I live and work with video in Europe, where I have recently been told that the optimal audio is MP3 (or MP4?) rather than Dolby 5.1. Is this correct, and how much more space does Dolby 5.1 take than Dolby 2.0? How significant an issue is this when making DVDs for musicians?
Someone was mistaken when they told you that DVDs accept - let alone are optimized for - MP3 or MP4 files. That's flat-out wrong.
As for the difference between Dolby 5.1 and 2.0, that depends on your encoding rates. Typically, most folks encode Dolby 2.0 at 192 kbps, with some choosing to up the bit rate to 224 kbps. Beyond 224 kbps you're not actually improving audio quality for a 2.0 mix; you're just bloating your file size. When it comes to 5.1 audio, it's typical to encode at 384 or 448 kbps.
When it comes to compressed vs. uncompressed audio, it does matter when you're making DVDs for musicians. But that means that you'll probably need to lower the bit rate on your video files.
Lastly, can someone recommend the best-quality audio/video compression solution for a DVD containing approx. 40 minutes of audio and video footage for use in Europe?
If you want to keep your audio as AIFF files, set your encode to 2-Pass VBR Best, with an average bit rate of 5.0 Mbps and a max bit rate of 7.0 Mbps. If you need things to happen a bit faster, use One Pass (not One Pass VBR) and use 5.5 Mbps as your bit rate.
If you want a firm grasp of how all these numbers work, there is a section at the back of the DVDSP manual that explains how to calculate bit rates (what we call bit budgeting in the business). Give that a quick once-over if you can.
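To give a rough feel for that arithmetic (my own illustrative numbers, not taken from the manual): a single-layer DVD holds about 4.7 GB, which is roughly 37,600 megabits. 40 minutes is 2,400 seconds, so the raw budget is about 37,600 / 2,400, or roughly 15.6 Mbps, and the DVD-Video spec caps the combined stream at about 10 Mbps in any case. At 5.0 Mbps video plus 0.192 Mbps Dolby 2.0 audio, you would use about 5.2 Mbps x 2,400 s, which is around 12,500 megabits or 1.6 GB, so 40 minutes fits with plenty of headroom.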

  • SQL 2008 Compression

    Hi All,
Does anyone know of an SAP Note that confirms SQL 2008 compression support and best practice regarding compression on BPC 7.5?
    Regards,
    Andries van den Berg

    Hi Andries
I am not aware of any SAP Notes specific to BPC. SAP Note 1488135 makes reference to SQL Server database compression for NetWeaver.
I've not seen it used much, because there are implications in terms of resource usage, unless you have space constraints. I advise that you test it extensively. That said, compression is transparent to BPC, meaning that it is done at the database object layer. So if you issue the command "SELECT Col1 FROM tblFACTAppName" and the table is compressed, you wouldn't have to amend your SQL syntax; "SELECT Col1 FROM tblFACTAppName" would work as expected.
    More about database compression :
    [http://blogs.msdn.com/b/sqlserverstorageengine/archive/2008/01/27/compression-strategies.aspx]
    Extract from
    [http://blogs.technet.com/b/josebda/archive/2009/03/31/sql-server-2008-database-compression.aspx]
    "Compression occurs in the storage engine and the data is presented to most of the other components of SQL Server in an uncompressed state. This limits the effects of compression on the other components to the following:
    Bulk import and export operations
    When data is exported, even in native format, the data is output in the uncompressed row format. This can cause the size of exported data file to be significantly larger than the source data.
    When data is imported, if the target table has been enabled for compression, the data is converted by the storage engine into compressed row format. This can cause increased CPU usage compared to when data is imported into an uncompressed table.
    When data is bulk imported into a heap with page compression, the bulk import operation will try to compress the data with page compression when the data is inserted.
    Compression does not affect backup and restore.
    Compression does not affect log shipping.
    Enabling compression can cause query plans to change because the data is stored using a different number of pages and number of rows per page.
    Data compression is supported by SQL Server Management Studio through the Data Compression Wizard."
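If you do decide to test it, the commands look like this (a sketch only; dbo.tblFACTAppName and PAGE compression are illustrative examples, so estimate and test before touching production objects):
-- Estimate the space savings first:
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'tblFACTAppName',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
-- Then rebuild the table with page compression:
ALTER TABLE dbo.tblFACTAppName REBUILD WITH (DATA_COMPRESSION = PAGE);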
    I hope this helps
    kind Regards
    Daniel

  • Compression Technique

Hello, I have a query regarding a compression technique in C# and its security. Can you please go through this URL and suggest whether it's good or bad? Is there any alternative solution?
    http://www.codeproject.com/Articles/223610/A-Simple-String-Compression-Algorithm
    Best Regards
    Latheesh K
    Latheesh K Contact No:+91-9747369936

    Try something like this:
// Requires: using System; using System.IO; using System.IO.Compression;
// (the CompressionLevel overload of GZipStream needs .NET 4.5 or later)
public static byte[] Compress(string s)
{
    var enc = System.Text.Encoding.UTF8;
    var ms = new MemoryStream();
    // The GZipStream must be closed before ms is read, so that it flushes its final block.
    using (var cs = new GZipStream(ms, CompressionLevel.Optimal))
    {
        var bytes = enc.GetBytes(s);
        cs.Write(bytes, 0, bytes.Length);
    }
    return ms.ToArray();
}
public static string Decompress(byte[] bytes)
{
    var enc = System.Text.Encoding.UTF8;
    var ms = new MemoryStream(bytes);
    using (var cs = new GZipStream(ms, CompressionMode.Decompress))
    {
        var tms = new MemoryStream();
        cs.CopyTo(tms);
        return enc.GetString(tms.ToArray());
    }
}
static void Main(string[] args)
{
    var s = "Hello I have a query regarding Compression technique in C# and its security....can u plz go thru this url nad suggeste whetherr its good or bad...is there any alternative solutions....? ";
    var cs = Compress(s);
    var s2 = Decompress(cs);
    // s.Length * 2 approximates the in-memory (UTF-16) size of the original string.
    Console.WriteLine("passed {0}, string size {1}, compressed size {2}", s == s2, s.Length * 2, cs.Length);
}
    David http://blogs.msdn.com/b/dbrowne/

  • OS 10.5 Sharing – Files appear as "Read Only" – SOLUTION!

I have been having problems with sharing files between two Macs since upgrading to 10.5 Leopard. Files and catalogs were available on both machines, and both were set to "read & write"; however, files would only open as "read only" and could not be saved between the Macs.
    After an entire day of trying to find an answer (and talking to Apple Care for hours) I have found the solution. And it's a very simple fix. The UUID needs to be changed on the computer giving you trouble.
    Here's the solution from the original post below:
    In OS X 10.5, you can change the UUID for your account by going to the "Accounts" system preferences and clicking the lock to authenticate. Then right-click (control-click) your account name and select "Advanced Options." Then click the "Create New" button next to the UUID field, and a new number will be generated. This should be done on the computer that you cannot properly connect to, and not on the computer you are connecting from. All this will do is provide a new identifier for your account when you are logging in remotely, and should clear problems with authentication and permissions mismatches.
    Hope this helps!
    Here's the original post from MacFixIt (I am including the entire post so that anyone searching for these phrases or error messages can find it quickly):
    Apple Discussion poster "Thomas Kranz1" writes:
    "I am suddenly getting this message when I try to copy a file from my new MacBook Pro to my PowerBook running 10.4.9. 'You may need to enter the name and password for an administrator on this computer to change the item named [file I am trying to transfer] (stop, continue). I hit continue and get the error message: 'The item [that I am transferring] contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?' I say continue and get the error: 'The operation cannot be completed because you do not have sufficient privileges for some of the items.' My network was working fine until today. I can copy fine to all other computers on my network. But not this one."
    When OS X transfers data between computers, you first log into the remote system and mount a readable or writable sharepoint from the remote system. Then when you transfer files they should be given ownership and permissions based on the account you're using on the remote computer (not the current computer). This ensures you can access the files anytime and from anywhere as long as you use the same credentials to log into the computer containing them.
    When problems with permissions in file transfers occur, you should first check your sharepoints to ensure the account you are using has read-and-write access to the share. To do this, open the "Sharing" system preferences and select the "File Sharing" service. Then ensure the sharepoint is listed in the "Shared Folders" list and then give the username you're logging in with both "Read & Write" permissions. If you are logging in from a PC, then click the "Options" button and enable "Share files and folders using SMB" and then select the account you wish to use to log in. Clicking "Done" should allow you to log into the system using the specified credentials.
    If the system keeps presenting insufficient privileges errors even after supplying appropriate credentials, the problem may be with how the computer is authenticating the account being used. For network directory authentication, OS X uses a "Universally Unique Identifier" or "UUID" number to identify user accounts, and if there is a problem with the UUID not matching, you may still be able to log in locally to the remote computer but may have permissions problems when using the same credentials remotely. Apple started implementing network directories that use UUIDs in OS X 10.3, and in 10.5 they're required, since Apple changed from the NetInfo authentication technologies to "Open Directory," which is built for network directory authentication and implements UUIDs.
    In OS X 10.5, you can change the UUID for your account by going to the "Accounts" system preferences and clicking the lock to authenticate. Then right-click (options-click) your account name and select "Advanced Options." Then click the "Create New" button next to the UUID field, and a new number will be generated. This should be done on the computer that you cannot properly connect to, and not on the computer you are connecting from. All this will do is provide a new identifier for your account when you are logging in remotely, and should clear problems with authentication and permissions mismatches.
    For people having problems with copying files to a 10.4 machine, it may help to ensure ownership on the shared resource is properly set. To do this, launch the terminal and type the following command, followed by a single space:
    sudo chown -R `id -un`
    NOTE: The command uses "grave accent" marks (`), which is the character under the "tilde" next to the "1", and not apostrophes or single-quotes.
    After this has been typed (with the space following it), drag the shared folder to the terminal and the full path to the folder should complete. Then press enter and supply your password to complete the command. When this has been run, try copying the files again.


  • ACR 6.3RC still makes Bridge display some TIFFs & JPGs overexposed

    I had the same problem with TIFFs & JPGs after upgrading from ACR 6.1 to 6.2. My system is Win 7x64 and, of course, PsCS5 and Bridge 4.0.39. This time the upgrade was from ACR 6.1 to 6.3RC. Not all images are affected. The problem seems to be entirely random. The amount of overexposure seems to be in the order of 1 to 2 stops. The problem is limited in this way for me:
    1. All the affected images display correctly exposed when opened in Ps and ACR 6.3RC (and ACR 6.2) hosted by Bridge.
    2. Only TIFFs & JPGs (both B&W and Colour) "derived" ultimately from original TIFF scans (both B&W and Colour) made on a Nikon SuperCoolscan 5000 ED are affected. The original TIFF scans (99.99% are 16-bit) are all unaffected.
    3. The problem does not occur with any TIFF or JPG file derived from native CRW or NEF raw files or from native JPG files produced in-camera (Canon EOS 300D and Nikon D700), all of which are themselves unaffected.
4. It does not matter where you are in an image-editing process. Some intermediate edits to TIFFs (I never make intermediate JPG edits) are affected, others are not. Editing might start with the TIFF and continue with edits to the TIFF, and only some of the subsequent TIFFs (or final JPGs derived from them) are affected. For example, there might be 6 distinct edits to a TIFF and only edits 3 through 6 are affected. Editing might proceed immediately from the original TIFF scan to PSD, finalise with PSD, and then, AND sometimes only then, any TIFF or JPG made from the final, flattened PSD produces a file which Bridge will display overexposed. Only some files in a batch of scans, all made at the same time and under exactly the same conditions (almost always with no adjustments made with the Nikon scanning software), are ever affected - never all of them.
    5. Some of the edits to the original TIFF scans (where the edits were kept as TIFFs) were made with Picture Window Pro, some with Lightroom 1.3 (I think), some with ACR 4.x, ACR 5.x, but most were made with PS CS3, CS4 and CS5: it does not matter, there are some affected files from each. The problem certainly does occur with files which have never been opened, let alone edited, in any version of ACR.
    6. I have several instances where simply making a copy of a TIFF scan in another folder (whether the copy was made by Bridge or with Windows Explorer has no influence) results in the copy only and not the original being displayed overexposed.
    7. If a file is affected, all TIFFs AND all JPGs derived from it will be overexposed, never just one or the other.
    The overexposure problem detailed above does not occur with ACR 6.1. All the files affected by ACR 6.2 and 6.3RC are displayed with correct exposure by Bridge when ACR 6.1 is installed.
    I'm at a complete loss to understand what is going on. Any suggestions would be most welcome.
    Has anyone else who was experiencing the Bridge/overexposure problem with either ACR 6.2 or 6.3RC found a solution?

    This will be posted in both the ACR and Bridge User Forums
    I've done some experimentation and found a workaround for, not the solution to, my problem with ACR 6.2 and 6.3RC causing Bridge to display random TIFFs and JPGs overexposed.
This is a temporary and, if you have lots of affected files, very laborious, file-by-file workaround. Unless Adobe investigates, finds and corrects the cause of the problem, it seems likely to re-occur. Only fixing the cause of the problem (which must lie in the new code of ACR 6.2 and 6.3RC, i.e. the code changed since ACR 6.1) could be truly regarded as a solution.
Here is how I enabled Bridge to display my affected TIFFs and JPGs correctly, that is, with no overexposure at all. These steps apply equally to B&W and Colour images.
    For a 16-bit TIFF, duplicate it in Bridge (Edit/Duplicate) and then open the duplicate in ACR 6.2 or 6.3RC (ACR) hosted by Bridge. Make sure that all the sliders are zeroed in the Main panel and that the Amount slider in the Sharpening section of the Details panel is also set to zero (to avoid unwanted additional sharpening). Then open the image in Photoshop (Ps) and immediately save the image (Save As) over itself (yes, you're replacing the file) back in the folder from which you opened it into ACR, making sure to choose no compression. The resaved duplicate will now be displayed with its proper exposure in Bridge, and any further TIFFs or JPGs (whether 16-bit or 8-bit) derived from this file will also be displayed correctly. Following this procedure, my invariable experience is that I end up with a TIFF either the same size as or smaller than the affected TIFF.
    Why no compression? Strangely, if you choose LZW compression (really the only logical alternative to "None") you end up with a larger file than you get with no compression. I have no idea why. Why not simply fix the affected file instead of a duplicate? No reason really, other than safety. If things go belly-up you still have your original, albeit, affected file. I also intend to keep my affected files for awhile just in case of some possible desire or need to re-examine them in the future.
    For an 8-bit TIFF, although I don't have any which are affected, the procedure would be the same as for a 16-bit TIFF. You will need to conduct your own tests regarding compression/no compression and the resulting file size.
    For a JPG, the procedure is pretty much the same (duplicating, opening in ACR, zeroing all sliders, opening in Ps and then saving over itself) but when saving try choosing Maximum Quality (12) which mostly, for me at least, produces a replacement JPG the same size as the original affected file. Of course, you can choose a greater level of compression if you wish, and my tests show that this will not adversely affect the outcome. You can use the replacement JPG to produce further, derivative 8-bit TIFFs or JPGs and all will display properly exposed in Bridge.

  • Reading ACE file format?

    Anyone know of open-source or commercial Java source code or libraries to read ACE-format files? (http://www.winace.com/). They do have a freely available DLL that'll read them... but that means integrating. Hmm.
    Thoughts? Thanks,
    dwh

Anyone know of open-source or commercial Java source code or libraries to read ACE-format files? (http://www.winace.com/). They do have a freely available DLL that'll read them... but that means integrating. Hmm.
Well, ol' boy, it looks as if you are in for a spot of JNI. Perhaps one could find a library to read ACE-format files, but that site declares that their library's files are in a proprietary format that is not fully compatible with the ACE format:
"Please note: Although the compression used in both ACE Compression Libraries offers the same performance regarding compression speed and ratio as the retail version of WinAce, the archive format used in these two Libraries is not fully compatible with the standard ACE format."
    So tally-ho and off to native methods. Good luck chap.

  • Repository files downloaded as READ ONLY

    Hi,
We are using the repository for SCM of file-based, non-Oracle applications. When I download files from a folder in the repository, they appear on the filesystem as Read-Only.
How can I prevent/control this? After download, I want the files to carry no OS attributes (like Archive or Read-Only).
    Serverside: Windows NT 4.0, Ora817, Repos6iR4 version 6.5.52.2.0
    Clientside: Windows NT 4.0
    TIA
    Rinse Veltman
    CIBER Solution Partners

    David,
    Of course the files are checked in; otherwise I wouldn't be able to create a configuration on them and base a workarea on the configuration in order to download the files.
Here's my problem: I check files in and out while developing a release. A release, for me, is a set of (checked-in) objects and folders grouped in a checked-in configuration.
Once I'm finished, I want to download the release. In order to do this, I base a 'DOWNLOAD' workarea on the configuration that represents the release.
Then I download the DOWNLOAD workarea and have all the files of the release on my file system. But I do not want them to be READ-ONLY!!, since the folders and files that make up my application have to be moved and copied around a couple of servers using FTP scripts, and this is a whole lot more complicated when the files and folders are read-only.
If this cannot be disabled, then Oracle SCM is a lot less attractive for our way of doing SCM... although I have been able to adjust all my Java FTP classes so that write access is enabled on all files and folders after download and before publishing the application to all servers.
    By the way I use the Java API of Oracle SCM to download the contents of the DOWNLOAD workarea.
    Regards,
    Rinse Veltman
    CIBER Solution Partners
