Sorting folders based on an extended attribute?

I created a subclass of Folder and added a custom
attribute 'sort'. I added several folders with the 'sort' value
set. Now I want to retrieve these folders sorted by the 'SORT'
attribute. Below is my code. Note that if I use 'NAME' as my
SortQualifier value the sort works, but using 'SORT' does not;
it returns 0 results.
How can I specify the sort order in which I want my folders to
appear when the order is not based on the standard Folder class
attributes? If it is not possible with subclassing, what other
options are there?
Vector liblist = new Vector();
Folder po = null;
PublicObject[] libraries = null;
SortSpecification sort = new SortSpecification();
SortQualifier sq = new SortQualifier("SORT", true); // <<< note the custom attr.
sort.addSortQualifier(sq);
String rootDir = m_Root + curloc;
po = (Folder) getPublicObject(session, rootDir);
po.setSortSpecification(sort);
libraries = po.getItems();
for (int i = 0; i < libraries.length; i++) {
     liblist.addElement(libraries[i].getName());
}
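
If the server-side sort on the subclassed attribute keeps returning no results, one client-side fallback (a minimal sketch, not the documented behaviour of this API) is to fetch the items without a SortSpecification and sort them in Java before building the list. The readSortValue() helper is hypothetical; substitute whatever accessor your Folder subclass actually exposes for the 'SORT' value.

PublicObject[] items = po.getItems();            // fetched without a SortSpecification
// Sort client-side on the custom attribute; readSortValue() is a hypothetical
// helper standing in for your subclass's accessor for the 'SORT' value.
java.util.Arrays.sort(items, new java.util.Comparator() {
    public int compare(Object a, Object b) {
        int sa = readSortValue((PublicObject) a);    // hypothetical
        int sb = readSortValue((PublicObject) b);    // hypothetical
        return (sa < sb) ? -1 : ((sa == sb) ? 0 : 1);
    }
});
for (int i = 0; i < items.length; i++) {
    liblist.addElement(items[i].getName());
}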

I don't know of any functionality to do that in Bridge now; however, it certainly can be done via scripting.
Perhaps one of our forum contributors might want to take on a "picture sorter" script...
Bob
Adobe Workflow Scripting

Similar Messages

  • Unusual extended attribute on Library folder

    I was looking around in my home folder and noticed that my Library (/Users/xxx/Library) folder has an unusual extended attribute. Have attached an image of the detailed information. Anyone have any idea what this is about? The attribute is 620 bytes long and looks to be some sort of compressed or binary data. The attribute has a weird name "Q3JCEFADN8SD817B1317B7C0707CA" and I've searched on the web for information but no luck.
    Any idea on how to track down when or how or by what program this got added to the Library folder? I've looked in my entire user folder and only this Library folder has this particular extended attribute.
    Thanks for any ideas or suggestions.
    -Bob

    You've gotten misinformation, so you're confused.  Not your fault at all.
    The ROOT LEVEL Library is not hidden in 10.7.5 at all.  It's right there where it has always been, at the Root Level, i.e. at the same level where your Applications, System and Users folders have always been.
    That's the Library where your fonts belong, in a sub-folder labeled Fonts.
    What is hidden is the User Library, and it is hidden for a very good reason: you have no business messing with that Library.  Fonts do not belong there at all.  Apple made it hidden precisely because they don't want you to find it.  And it has nothing to do with any updates; it has been that way since the Lion 10.7.0 beta, which was not available to the general public.
    There is of course a way to access it, and no Terminal hocus-pocus is involved at all.  It's right under the Go menu in the Finder > Go to Folder…; you just need to know exactly what path to type there to get to the Library.
    I won't tell you because you really have no business going there in the first place, especially at your level of knowledge of Mac OS X.
    Adobe help said that with the updates, apple made the library folder invisible to photoshop
    Ah, you just happened to talk to a complete, ignorant idiot, of which there are untold numbers manning the outsourced customer service and tech support call centers in India.
    Even the hidden User Library is visible to all applications.  What you were told is utter nonsense.  Apple wanted to keep clueless users out, not applications.  Good Grief!
    Now tell us what fonts you're talking about specifically, where you put them and how you manage your fonts.
    Hope I will have regained a measure of composure by the time you get back to us with detailed information about your system and your computer maintenance practices.
    Make sure your Photoshop installation is updated to CS5.1 vers. 12.1.

  • Can't repair Macintosh HD in Disk Utility (extended attributes)

    Hi all,
    I have a Retina Macbook Pro with one of them new fancy SSDs in it. I'm running 10.9.3 all up-to-date and am a bit computer-savvy, but I'm more than puzzled by what my Mac has been doing as of late.
    When trying to install Windows via Bootcamp, I ran into an error message in the Partitioning step of the process. I hopped into Disk Utility, figuring I could simply partition from there (something in me trusts Disk Utility more than the Boot Camp Assistant). Partition failed!
    I then executed the "Verify Disk" command in Disk Utility, only to be told I needed to restart from the Recovery Partition and use the "Repair Disk" function of Disk Utility then. Log of this whole event is at the end of this post, but the offending line seems to be
    2014-06-12 20:59:30 -0400: Checking extended attributes file.
    2014-06-12 20:59:30 -0400: (It should be 15783744 instead of 14437995)
    I then restarted and followed these instructions. "Verify Disk" found no errors, nor did "Repair Disk." I rebooted, only to be met with the same problem!
    Is there some sort of lower-level command I can use to repair my disk if I boot in Single User mode or something?
    Thanks,
    Nathan
    2014-06-12 20:58:30 -0400: Disk Utility started.
    2014-06-12 20:58:36 -0400: Verifying partition map for “APPLE SSD SD512E Media”
    2014-06-12 20:58:36 -0400: Starting verification tool:
    2014-06-12 20:58:36 -0400: Checking prerequisites
    2014-06-12 20:58:36 -0400: Checking the partition list
    2014-06-12 20:58:36 -0400: Checking for an EFI system partition
    2014-06-12 20:58:36 -0400: Checking the EFI system partition’s size
    2014-06-12 20:58:36 -0400: Checking the EFI system partition’s file system
    2014-06-12 20:58:36 -0400: Checking all HFS data partition loader spaces
    2014-06-12 20:58:36 -0400: Checking booter partitions
    2014-06-12 20:58:36 -0400: Checking booter partition disk0s3
    2014-06-12 20:58:36 -0400: Checking file system
    2014-06-12 20:58:36 -0400: Checking Journaled HFS Plus volume.
    2014-06-12 20:58:36 -0400: Checking extents overflow file.
    2014-06-12 20:58:36 -0400: Checking catalog file.
    2014-06-12 20:58:36 -0400: Checking multi-linked files.
    2014-06-12 20:58:36 -0400: Checking catalog hierarchy.
    2014-06-12 20:58:36 -0400: Checking extended attributes file.
    2014-06-12 20:58:36 -0400: Checking volume bitmap.
    2014-06-12 20:58:36 -0400: Checking volume information.
    2014-06-12 20:58:36 -0400: The volume Recovery HD appears to be OK.
    2014-06-12 20:58:36 -0400: Checking Core Storage Physical Volume partitions
    2014-06-12 20:58:36 -0400: Checking storage system
    2014-06-12 20:58:36 -0400: Checking volume
    2014-06-12 20:58:36 -0400: disk0s2: Scan for Volume Headers
    2014-06-12 20:58:36 -0400: disk0s2: Scan for Disk Labels
    2014-06-12 20:58:36 -0400: Logical Volume Group D79A3F82-3153-4DCC-A58F-C9B3ADAD2F6E on 1 device
    2014-06-12 20:58:36 -0400: disk0s2: Scan for Metadata Volume
    2014-06-12 20:58:36 -0400: Logical Volume Group has a 16 MB Metadata Volume with double redundancy
    2014-06-12 20:58:36 -0400: Start scanning metadata for a valid checkpoint
    2014-06-12 20:58:36 -0400: Load and verify Segment Headers
    2014-06-12 20:58:36 -0400: Load and verify Checkpoint Payload
    2014-06-12 20:58:36 -0400: Load and verify Transaction Segment
    2014-06-12 20:58:36 -0400: Incorporate 0 newer non-checkpoint transactions
    2014-06-12 20:58:36 -0400: Load and verify Virtual Address Table
    2014-06-12 20:58:36 -0400: Load and verify Segment Usage Table
    2014-06-12 20:58:36 -0400: Load and verify Metadata Superblock
    2014-06-12 20:58:36 -0400: Load and verify Logical Volumes B-Trees
    2014-06-12 20:58:36 -0400: Logical Volume Group contains 1 Logical Volume
    2014-06-12 20:58:36 -0400: Load and verify BDEE2B2F-6770-4707-AE0A-3862965E6414
    2014-06-12 20:58:36 -0400: Load and verify B761A88F-B034-44A9-8F54-F7732861C825
    2014-06-12 20:58:36 -0400: Load and verify Freespace Summary
    2014-06-12 20:58:36 -0400: Load and verify Block Accounting
    2014-06-12 20:58:36 -0400: Load and verify Live Virtual Addresses
    2014-06-12 20:58:36 -0400: Newest transaction commit checkpoint is valid
    2014-06-12 20:58:36 -0400: Load and verify Segment Cleaning
    2014-06-12 20:58:36 -0400: The volume D79A3F82-3153-4DCC-A58F-C9B3ADAD2F6E appears to be OK
    2014-06-12 20:58:36 -0400: The partition map appears to be OK
    2014-06-12 20:58:36 -0400:
    2014-06-12 20:58:36 -0400: Verifying volume “Macintosh HD”
    2014-06-12 20:58:36 -0400: Starting verification tool:
    2014-06-12 20:58:36 -0400: Checking storage system
    2014-06-12 20:58:37 -0400: Checking volume
    2014-06-12 20:58:37 -0400: disk0s2: Scan for Volume Headers
    2014-06-12 20:58:37 -0400: disk0s2: Scan for Disk Labels
    2014-06-12 20:58:37 -0400: Logical Volume Group D79A3F82-3153-4DCC-A58F-C9B3ADAD2F6E on 1 device
    2014-06-12 20:58:37 -0400: disk0s2: Scan for Metadata Volume
    2014-06-12 20:58:37 -0400: Logical Volume Group has a 16 MB Metadata Volume with double redundancy
    2014-06-12 20:58:37 -0400: Start scanning metadata for a valid checkpoint
    2014-06-12 20:58:37 -0400: Load and verify Segment Headers
    2014-06-12 20:58:37 -0400: Load and verify Checkpoint Payload
    2014-06-12 20:58:37 -0400: Load and verify Transaction Segment
    2014-06-12 20:58:37 -0400: Incorporate 0 newer non-checkpoint transactions
    2014-06-12 20:58:37 -0400: Load and verify Virtual Address Table
    2014-06-12 20:58:37 -0400: Load and verify Segment Usage Table
    2014-06-12 20:58:37 -0400: Load and verify Metadata Superblock
    2014-06-12 20:58:37 -0400: Load and verify Logical Volumes B-Trees
    2014-06-12 20:58:37 -0400: Logical Volume Group contains 1 Logical Volume
    2014-06-12 20:58:37 -0400: Load and verify BDEE2B2F-6770-4707-AE0A-3862965E6414
    2014-06-12 20:58:37 -0400: Load and verify B761A88F-B034-44A9-8F54-F7732861C825
    2014-06-12 20:58:37 -0400: Load and verify Freespace Summary
    2014-06-12 20:58:37 -0400: Load and verify Block Accounting
    2014-06-12 20:58:37 -0400: Load and verify Live Virtual Addresses
    2014-06-12 20:58:37 -0400: Newest transaction commit checkpoint is valid
    2014-06-12 20:58:37 -0400: Load and verify Segment Cleaning
    2014-06-12 20:58:37 -0400: The volume D79A3F82-3153-4DCC-A58F-C9B3ADAD2F6E appears to be OK
    2014-06-12 20:58:37 -0400: Checking file system
    2014-06-12 20:58:37 -0400: Performing live verification.
    2014-06-12 20:58:37 -0400: Checking Journaled HFS Plus volume.
    2014-06-12 20:58:37 -0400: Checking extents overflow file.
    2014-06-12 20:58:37 -0400: Checking catalog file.
    2014-06-12 20:58:59 -0400: Checking multi-linked files.
    2014-06-12 20:59:30 -0400: Checking extended attributes file.
    2014-06-12 20:59:30 -0400: (It should be 15783744 instead of 14437995)
    2014-06-12 20:59:30 -0400: Error: This disk needs to be repaired using the Recovery HD. Restart your computer, holding down the Command key and the R key until you see the Apple logo. When the OS X Utilities window appears, choose Disk Utility.
    2014-06-12 20:59:30 -0400:
    2014-06-12 20:59:30 -0400: Disk Utility stopped verifying “Macintosh HD”: This disk needs to be repaired using the Recovery HD. Restart your computer, holding down the Command key and the R key until you see the Apple logo. When the OS X Utilities window appears, choose Disk Utility.
    2014-06-12 20:59:30 -0400:

    I'm having a similar problem with my G4 Desktop.
    I have a 2nd internal hard drive and 2 FW Drives.
    None of them are able to be verified or repaired using DU.
    I get the following message: "Verify volume failed with error Could not unmount disk." And then some message about "make sure that all your files are closed."
    I tried soft-booting by holding the shift-key down just as a trouble-shooting idea and I was able to verify and the drives were working well. However, when I restarted in normal mode I was not able to verify or repair.
    What is my G4's problem? Why does it need to use that secondary drive in order to operate? What do I need to do to fix it?
    Any suggestions?
    G4 Dual 1.42   Mac OS X (10.4.6)   1 Gig Ram / 2 Internal Hard Drives

  • CSI: update_item_instance issues while creating extended attributes

    Hello,
    For our customer's data conversion, installed base instances have already been created; only the extended attributes still need to be created.
    Issues I have encountered are:
    - Whenever an error is encountered in the API call, all following records are rejected with the same error (even the same instance ID/primary key/attribute ID) as the first error record.
    - Extended attributes are randomly entered in the wrong fields (as if one field had gone missing and the others had been updated)
    - Date formats have been changed from DD/MM/YYYY to DD-MON-YYYY when consulting from Installed Base (but this happened only once)
    Other information: there are 14 extended attributes to be added for each item instance; I removed some of the attributes to make the code more readable.
    I followed the Metalink note that shows the implementation method for this API and only added the recurring elements that we needed for the 14 attributes, plus functions to retrieve item_id and such.
    The "message()" procedure is defined as FND_FILE.PUT_LINE(which => fnd_file.log, buff => p_msg);
    PROCEDURE main (errbuf  OUT VARCHAR2
                   ,retcode OUT NUMBER)
    IS
         vl_status                BOOLEAN;
         vl_item_id                NUMBER;
         vl_instance_id           NUMBER;
         v_count               NUMBER;
         v_err_rec CSI_DATASTRUCTURES_PUB.TRANSACTION_ERROR_REC;
         v_trx_id number ;
         v_ret varchar2(240);
         v_msg_count number;
         v_msg_data varchar2(2000);
         v_date date;
         t_msg_dummy NUMBER;
         t_output VARCHAR2(2000);
         vl_attribute_id NUMBER;
         v_inst_rec csi_datastructures_pub.instance_rec;
         v_ext_attr csi_datastructures_pub.extend_attrib_values_tbl;
         v_party_tbl csi_datastructures_pub.party_tbl;
         v_account_tbl csi_datastructures_pub.party_account_tbl;
         v_pr_tbl csi_datastructures_pub.pricing_attribs_tbl;
         v_org_ass_tbl csi_datastructures_pub.organization_units_tbl;
         v_asset_ass_tbl csi_datastructures_pub.instance_asset_tbl;
         v_trx csi_datastructures_pub.transaction_rec;
         v_inst_id_lst csi_datastructures_pub.id_tbl;
    BEGIN
         APPS.FND_GLOBAL.Apps_Initialize(FND_PROFILE.value('user_id')
    ,FND_PROFILE.value('resp_id')
    ,FND_PROFILE.value('resp_appl_id'));
         IF cur_iea_attributes%ISOPEN
         THEN
              CLOSE cur_iea_attributes;
         END IF;
         IF get_trx_id
         THEN
              message('Transaction ID '||to_char(gv_trx_id));
              FOR v_cur_iea_attributes IN cur_iea_attributes
              LOOP
                    v_count:=0;        -- Reset the row counter. Note: rows in the interface = 14 x rows in staging
                    v_ext_attr.DELETE; -- Clear attribute rows left over from the previous iteration, so stale values cannot land in the wrong fields
                   IF check_unique_serial(v_cur_iea_attributes.numero_serie)
                    -- Retrieve the item ID
                   THEN
                        get_item_id(p_serial_number => v_cur_iea_attributes.numero_serie
                                       ,p_item_id => vl_item_id
                                        ,p_status => vl_status);
                        message('Resultat get_item_id: '||to_char(vl_item_id));
                        get_instance_id(p_item_id => vl_item_id
                                            ,p_serial_number => v_cur_iea_attributes.numero_serie
                                            ,p_instance_id => vl_instance_id
                                             ,p_status => vl_status);
                        IF vl_status
                        THEN
                             v_trx.TRANSACTION_DATE:=trunc(SYSDATE);
                             v_trx.SOURCE_TRANSACTION_DATE:=trunc(SYSDATE);
                             v_trx.TRANSACTION_TYPE_ID:=gv_trx_id;
                             message('Entrance v_count'||to_char(v_count));
                             v_inst_rec.instance_id:=vl_instance_id;
                             v_inst_rec.object_version_number := 1;
                             IF v_cur_iea_attributes.IAE_GUARANTEE_END_DATE IS NOT NULL
                             THEN
                                  v_count:=v_count+1;
                                  v_ext_attr(v_count).instance_id := vl_instance_id;
                                  message('v_ext_attr(v_count).instance_id: '||to_char(v_ext_attr(v_count).instance_id));
                                  message('vl_instance_id: '||to_char(vl_instance_id));
                                  get_attribute_id(p_attribute_code =>'GUARANTEE_END_DATE'
                                                      ,p_attribute_id => vl_attribute_id);
                                  message('vl_attribute_id: '||to_char(vl_attribute_id));
                                  v_ext_attr(v_count).attribute_id := vl_attribute_id;
                                  v_ext_attr(v_count).attribute_value := v_cur_iea_attributes.IAE_GUARANTEE_END_DATE ;
                                  message('v_cur_iea_attributes.IAE_GUARANTEE_END_DATE: '||v_cur_iea_attributes.IAE_GUARANTEE_END_DATE);
                                  message('');
                             END IF;
                             IF v_cur_iea_attributes.IAE_LAST_CTRL_DATE IS NOT NULL
                             THEN
                                  v_count:=v_count+1;
                                  message('v_count'||to_char(v_count));
                                   -- The 6 mandatory fields to fill in for an update
                                  v_ext_attr(v_count).instance_id :=vl_instance_id;
                                  get_attribute_id(p_attribute_code =>'LAST_CTRL_DATE'
                                                      ,p_attribute_id => vl_attribute_id);
                                  v_ext_attr(v_count).attribute_id := vl_attribute_id;
                                  v_ext_attr(v_count).attribute_value := v_cur_iea_attributes.IAE_LAST_CTRL_DATE ;                    
                             END IF;
                              CSI_ITEM_INSTANCE_PUB.update_item_instance(
                                       p_api_version => 1
                                       ,p_commit => fnd_api.g_false
                                       ,p_init_msg_list => fnd_api.g_true
                                       ,p_validation_level => fnd_api.g_valid_level_full
                                       ,p_instance_rec => v_inst_rec
                                       ,p_ext_attrib_values_tbl => v_ext_attr
                                       ,p_party_tbl => v_party_tbl
                                       ,p_account_tbl => v_account_tbl
                                       ,p_pricing_attrib_tbl => v_pr_tbl
                                       ,p_org_assignments_tbl => v_org_ass_tbl
                                       ,p_asset_assignment_tbl => v_asset_ass_tbl
                                       ,p_txn_rec => v_trx
                                       ,x_instance_id_lst => v_inst_id_lst
                                       ,x_return_status => v_ret
                                       ,x_msg_count => v_msg_count
                                        ,x_msg_data => v_msg_data);
                                  -- Output the results
                                  if v_msg_count > 0
                                  then
                                       for j in 1 .. v_msg_count
                                       loop
                                            fnd_msg_pub.get ( j , FND_API.G_FALSE , v_msg_data , t_msg_dummy );
                                            t_output := ( 'Msg' || To_Char ( j ) || ': ' || v_msg_data );
                                            message( SubStr ( t_output , 1 , 255 ) );
                                       end loop;
                                  end if;
                        ELSE
                             message('Update not performed for '||v_cur_iea_attributes.numero_serie||' and '||vl_item_id);
                        END IF;
                   ELSE
                        message('Erreur sur '||v_cur_iea_attributes.numero_serie);
                   END IF;
                   commit;
                   message('Commit done for update loop');
              END LOOP;
         ELSE
              message('No Transaction ID - EOP');
         END IF;
    EXCEPTION
         WHEN OTHERS
         THEN
              ROLLBACK;
              message('Error in main program, rolling back');
    END main;
    Thanks!

    Todd
    The issue seems to be caused by this:
    x_instance_rec.inventory_item_id := ib_rec3.inventory_item_id;
    You are selecting the item id from your staging table, but the serial number in csi_item_instances is associated with an instance whose item id is different from ib_rec3.inventory_item_id; hence this error. You cannot change the item in the IB once the instance is created.
    You have two choices: comment this line out, or change the inventory_item_id in the staging table to match csi_item_instances.
    Also, a lot depends on your serial number uniqueness setting. If you are using the same serial number for two different items (uniqueness within item_id), you need to be careful which one you are picking (based on the serial number).
    Thanks
    Nagamohan

  • Where to configure extended attributes (for cases and alerts)

    Hi,
    Re. http://docs.oracle.com/cd/E48549_01/index.htm (referred to by the Online Help under Applications > Case Management > Case Management concepts > Extended attribute), please can you advise where one should make the configuration changes described, on the Windows platform?
    This document says "The complete set of Case Management extended attributes that are used on an Oracle Enterprise Data Quality (EDQ) server are configured in a file named flags.xml in the oedq_localhome/casemanagement directory. This file must be modified to add new extended attributes"
    On my Windows installation I do not have that flags.xml file under C:\ProgramData\Oracle\Enterprise Data Quality\oedq_local_home\casemanagement
    I do, however, have the file under C:\ProgramData\Oracle\Enterprise Data Quality\oedq_home\casemanagement, but I am not able to modify it.
    Please could you confirm how to proceed?
    Also, generally, what is the purpose of the oedq_local_home and oedq_home directories?
    Thanks, Nik

    Hi Nik,
    From section 1.4.2 of the install guide.
    https://docs.oracle.com/middleware/1213/edq/DQINS/planning.htm#DQINS5205
    EDQ Configuration Directories
    EDQ requires two configuration directories, which are separate from the EDQ Home (installation) directory that contains the program files. The configuration directories are:
    The base configuration directory: This directory contains default configuration data. Once EDQ is installed, the files in the base configuration directory must not be altered, renamed, or moved.
    The local configuration directory: This directory contains overrides to the default configuration. EDQ looks for overrides in this directory first, before looking in the base configuration directory. Files in the local configuration directory can be modified to customize or extend EDQ.
    The names and locations of the configuration directories are as follows:
    If you are using Oracle WebLogic Server, the Oracle installation wizard automatically creates and populates the configuration directories in the EDQ domain with the names of oedq.home (base configuration directory) and oedq.local.home (local configuration directory). An example installation path is: WLS_HOME/user_projects/domains/edq_domain/edq/oedq.home
    WLS_HOME/user_projects/domains/edq_domain/edq/oedq.local.home
    If you are using Apache Tomcat, you create the configuration directories manually in any location, with any names, and the configuration utility will populate them. You are prompted to create the directories during the installation instructions.
    Just copy flags.xml from your oedq.home casemanagement folder to your oedq.local.home casemanagement folder and edit the file accordingly.
    thanks,
    Nick

  • Can rsync filter on extended attributes?

    Hi there,
    I've been using rsync for quite a while now to perform various backups on my network, and it's been working without issue.
    What I'm wondering today is if it's possible to create an rsync job that can filter on the colour label value of a folder? Basically I'd like a job that essentially does
    "IF this folder IS red OR orange THEN rsync it"
    I've googled for a while now, and while I see that the -E option preserves extended attributes, and the filter command can filter on file/folder name attributes, I can't seem to figure out a way to filter based on extended attributes.
    Is there a way?
    ...Mike

    This is not going to be pretty.
    bash doesn't look to have a direct path into Finder's sort-of-private extended attribute information, save via some "creativity" involving xargs and xattr.
    There are also paths into the extended attributes via Perl and via Python xattr calls (http://www.entropy.ch/blog/Developer).
    Invoking osascript to script Finder via AppleScript can be a path.
    I'd probably customize some C code (http://www.mactech.com/articles/mactech/Vol.21/21.05/ACLs/index.html) to fetch the attributes, and call that from within the bash script. (These APIs are documented, if you're comfortable coding in C.)
    The "sneaky" or "clever" approach would be a customized rsync; tweak the file selection logic to do what you want.
    Have a look at updating rsync 3.0 (http://patternbuffer.wordpress.com/2008/01/04/rsync-30-looking-very-promising) and (for testing) the backup bouncer tool (http://www.n8gray.org/code/backup-bouncer).
    But none of this is particularly pretty.

  • Outsmarted by extended attributes

    Got my new computer. No clue as to how it happened, but in the Finder, /Users/jv doesn't even show up, so you can't navigate to it (and consequently, to its Public and Public/Drop Box folders) anymore. In Terminal, the /Users/jv directory has an extended-attribute "@" flag appended in the "ls -l" output. Found a thread that indicated a "removexttr" command would remove extended attributes. I get "command not found" when I type "removexttr /Users/jv". How do I make /Users/jv behave like all the other /Users folders (i.e., be visible and navigable in Finder)?
    Equally concerned about a bunch of stuff that has a "+" appended to it (Desktop folder, Documents folder, etc., etc., even the Public folder). How do I make this crap go away?

    Where did you see this "removexttr" command? I don't think there is such a thing.
    There isn't. When it said 'no such command' I searched through the bowels of the usual suspects, /bin, /sbin, /usr/bin, /usr/sbin, which, naturally, came up empty. Hence, one of the catalysts leading up to my posting here. "removexttr" was mentioned in some post here in, I believe, this forum. Of course, now, despite multiple combinations of search terms, I can't find it. But, it's not important. The poster must have just been writing in a loose vernacular prose.
    Being impatient, and also since the new mac didn't have much installed on it yet, nor were the user accounts customized too much yet, I took the "sledgehammer" problem solving approach and reformatted and reinstalled the OS and recreated the accounts, before having seen all these new posts in this thread. Figures... So, anyways, the problem went away, even though it was via the "sledgehammer" approach. I haven't reinstalled a copy of my .ssh directory with dsa keypair and authorized_keys2 (dsa keypairs and .ssh were mentioned in that nebulous post that I alluded to earlier), so who knows? The problem may rear its ugly head once again if/when I do. These new posts, particularly yours here, may very well indeed come to my rescue.
    Thanks (to all of you) for taking the time to respond.

  • Adding Extended Attributes for Data Exporter

    Hi, I'm having trouble getting an extended attribute to export with the new Data Exporter feature. In fact, once I alter the export schema to include the additional column (extended attribute), it won't write to that table at all.
    I am doing the following per the documentation from Sun; hopefully someone can point out the error of my ways.
    First, I alter the IDM Schema Configuration.xml file to include the additional User extended attribute.
    Next, I alter the model-export.xml file under model name='User' within WS_HOME to include the additional attribute. My entry is as follows:
    <field name='employeeId'
    type='java.lang.String'
    introduced='8.0'
    max-length='50'
    forensic='User'
    queryable='true'
    exported='true'
    friendlyName='Employee Id'>
    <description>The Peoplesoft ID coming over</description>
    </field>
    Next, I go to the unzipped directory of IDM, hence \exporter, and execute "ant rebuild" and "ant deploy".
    The following takes place: the rebuild process regenerates my create and drop schema configuration scripts for MySQL. I run both scripts, and
    my new column for the newly added extended attribute appears in the EXT_USER table. I also issue the proper permission commands on the tables.
    I then undeploy my web application from Tomcat and re-deploy the web app.
    I start the server, log into IDM and go to the Data Exporter configuration page. Under the User data type, my extended attribute does not show up. Further, what once worked no longer does: the scheduler does not write to the EXT_USER table and gives the following error, I believe while IDM is starting up. I have no doubt that this is a clue as to why it is not working:
    "StartupServlet: Defining properties from web.xml
    Starting: Identity Server...
    ...Finished starting Startup Servlet
    {Model=User, employeeId=[], assignedRoles=[], idmManager=, businessPhone=[], location=[], MemberObjectGroups=[(id=#ID#Top)], lhdis=true, lhlocked=false, controlledObjectGroupsRule=, ACCT_CD=[], lastModDate=Tue Oct 21 16:44:21 PDT 2008, failedPasswordLoginAttemptsCount=0, description=, lastModifier=Configurator, role=[], divisionCode=[], companyMobilePhoneNumber=[], fullname=Anuradha Rao, employeeType=[], CompanyMailingAddress=[], objectClass=[Top, Object, Principal, User], hasCapabilities=false, questionLocked=false, [email protected], subtype=, managerId=[], sponsorId=[], contractorLocation=[], jobCode=[], locked=false, failedQuestionLoginAttemptsCount=0, xmlSize=936, res=[], repoMod=Tue Oct 21 16:44:21 PDT 2008, accountId=, textPagerEmailAddress=[], lastname=Rao, lastAuditorScan=, user_resources=[], creator=Configurator, id=#ID#54D7-:3AE94AA9C11:110E5685:2FC0FC1B3DEDDBF6, title=[], idmManagerNameNotFound=, faxNumber=[], facility=[], dis=0, lastPasswordUpdate=Wed Oct 01 15:31:47 PDT 2008, name=definabl, authType=, effectiveDate=[], createDate=Thu Sep 25 10:56:13 PDT 2008, jobTitle=[], prov=2, accounts=[], ControlledObjectGroups=[], costCenter=[], firstname=Anuradha, correlationKey=, primaryObjectClass=User, departmentName=[], roleInfo=[], accountType=, middleInitial=[], departmentId=[], displayName=, cubeNumber=[], disabled=true, }
    ex: java.lang.reflect.InvocationTargetException"
    java.lang.reflect.InvocationTargetException is thrown, which leads me to believe that something is wrong with the JavaBean that is regenerated by the ant rebuild, and with the underlying User.hbm.xml file that is regenerated as well. I can see from the User.hbm.xml file that the new extended attribute has been added, but this is as far as I have gotten. I really don't know where to go from here.
    Any and all help is extremely appreciated.
    Thank you. Dan

  • When I restore my Mac with Time Machine and then want to partition my disk, Disk Utility always says incorrect number of extended attributes

    When I need to restore my Mac from a Time Machine backup and then partition my hard disk (the disk inside the computer, not the Time Machine disk), Disk Utility always says "Incorrect Number of Extended Attributes". I then boot into the Recovery HD and run disk repair. The result? "The volume Macintosh HD appears to be OK". So then I reboot into the normal OS and try the partition again; still "Incorrect number of extended attributes". I have even tried /sbin/fsck -fy in single-user mode, but it still says the volume is OK. I have tried partitioning my disk from the Recovery HD and it still fails.
    Can anyone please help me solve this problem?

    Try something stronger, such as DiskWarrior or TechTool Pro.
    iMac refurb (27-inch Mid 2011), OS X Mavericks (10.9.4), SL & ML, G4 450 MP w/Leopard, 9.2.2

  • Cannot create or replace : The specified extended attribute name was invalid.

    New problem arrived today. Trying to copy a file from 10.6 server with an XP (SP3) client. I get this error:
    Cannot create or replace (file name here): The specified extended attribute name was invalid.
    The contents of the file can be copied, but not the folder. Other files can be copied. There are no funny characters. The name is not too long. I propagated the permissions on the share and that had no effect. The problem exists on three different XP systems. Can't find extended properties that could be causing a problem. Any ideas?

    Nikon just released a Firmware update today for the D750

  • Querying on the basis of some attribute of composite primary key

    Hello,
    I am looking for suggestions about querying on the basis of just one attribute of a composite primary key.
    To exemplify:
    I have 6 attributes, for example A1, A2, A3, A4, A5, A6.
    A1, A2 and A3 together serve as the composite primary key. Now, because of the needs of the project, I want to query on the basis of any of A1, A2 or A3. One way I could think of is to have secondary indices on each of A1, A2, A3.
    Can someone explain roughly how to go about it?
    I am a new user of Berkeley DB Java Edition, hence not sure what would be a good way to do it. I understand one way to do it would be to keep A1, A2, A3 in the key class and A1, A2, A3, A4, A5, A6 in the value class as well, then create secondary indices on A1, A2 and A3 individually.
    Can someone suggest a more efficient way?
    Thanks,
    Will appreciate any suggestions.
    Prateek

    Exactly as you said. Create secondary indices on each attribute you want to index off of. I don't use the Java interface, but what you pretty much need to do is form your secondary key like so:
    skey: pkey_individual_attribute
    skeysize: pkey_individual_attribute_size
    How to do this is documented in the BDB Java API docs.
    For example:
        class MyKeyCreator implements SecondaryKeyCreator {
            public boolean createSecondaryKey(SecondaryDatabase secondary,
                                              DatabaseEntry key,
                                              DatabaseEntry data,
                                              DatabaseEntry result)
                throws DatabaseException {
                // DO HERE: Extract the secondary key from the primary key and
                // data, and set the secondary key into the result parameter.
                return true;
            }
        }
        SecondaryConfig secConfig = new SecondaryConfig();
        secConfig.setKeyCreator(new MyKeyCreator());
        // Now pass secConfig to Environment.openSecondaryDatabase
    The extractor function used to construct the secondary index is passed the primary key and primary data - therefore all the data is available to you with no need to duplicate the key within the data itself. While the standard example is to use some part of the primary data to form a secondary key - there's absolutely nothing against using only a part of the primary key to form a secondary key instead. The only thing you have to do is slice up said primary key and construct the "result" parameter to be a single attribute. The backend API already knows which composite key this secondary entry will be associated with and as such will implicitly form the data (or as you called it "value") section of the index (which will be the composite primary key passed to it).
    The primary key/data split should be the composite A1,A2,A3 as the key, with only A4,A5,A6 as the data.
    The secondary->get() call (within the Java API) takes a key and provides back the primary key and primary data (basically the same as the db->pget() call in the C API). Since you've already indexed individual attributes, based off of the composite key, into their own respective databases - you just query from one of your secondary indexes with whatever specific attribute as the key. You then use the filled in primary key and primary data to work off of.
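
    To make the lookup side concrete, here is a minimal sketch of opening one such secondary index and querying it by a single attribute. The database name, the environment and primary database variables (env, primaryDb) and the byte encoding of the A2 value are illustrative assumptions, not details from the original post; MyKeyCreator is assumed to extract the A2 bytes from the composite primary key.

        import com.sleepycat.je.*;

        // Open a secondary index keyed on A2 over the existing primary database.
        SecondaryConfig secConfig = new SecondaryConfig();
        secConfig.setAllowCreate(true);
        secConfig.setSortedDuplicates(true);          // several composite keys may share one A2 value
        secConfig.setKeyCreator(new MyKeyCreator());  // extracts A2 from the composite primary key
        SecondaryDatabase byA2 =
            env.openSecondaryDatabase(null, "byA2", primaryDb, secConfig);

        DatabaseEntry secKey   = new DatabaseEntry(a2Bytes);  // encoded A2 value (assumed encoding)
        DatabaseEntry primKey  = new DatabaseEntry();         // comes back as the composite A1,A2,A3
        DatabaseEntry primData = new DatabaseEntry();         // comes back as A4,A5,A6
        if (byA2.get(null, secKey, primKey, primData, LockMode.DEFAULT)
                == OperationStatus.SUCCESS) {
            // primKey now holds the composite primary key, primData the remaining attributes.
        }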

  • When I check my boot SSD drive using Disk Utility under Mavericks, I often get "Incorrect number of extended attributes" errors.  But if I boot off an external drive and check the same SSD, no errors are reported.  Is this a bug in Mavericks?

    When I check my boot SSD drive using Disk Utility under Mavericks, I often get "Incorrect number of extended attributes" errors.  But if I boot off an external drive and check the same SSD, no errors are reported. 
    This happens not just with the SSD in my Mac Mini, but with another SSD in my MacBook (both now running Mavericks).  So far as I know, all of the kit I am using is in good order (despite the file corruption reports).  So I am beginning to wonder if it could be due to a bug in Mavericks?  Both SSD drives have been formatted to MacOS Extended (journaled) format.  Should I have used a different format, I wonder?
    Has anyone else encountered this issue?
    Does anyone have a solution?
    Or an explanation that might help my investigation of the issue?
    Thanks guys,

    I understand that the Corsair Force 3 is not one of the SSD drives that are supported on Apple Macs. 
    I did try downloading and using Trim Enabler, but the error message came up both when it was off and when it was on.
    I understand that not everyone thinks Trim Enabler is a good program, though there is a new version out now, so I may give it another try.

  • 648 max , testing base and extended memory

    I don't see the thread I answered to once, where someone spoke of using DDR400 with a 648 Max or Max-L board and they were getting a "testing base and extended memory" error from the LEDs. Well, I just got the same error today and my system would not boot. I cut the power to the power supply and turned it back on; it would start to boot and hang, and I could barely get to the Windows logo; sometimes the video card wouldn't even put the logo on the screen. This is with an updated mobo that MSI sent me that I had just put in; it seemed to run fine at first, then this problem started occurring. The only difference I had in the setup from the previous outdated board was that I had the mem in DIMM 3 and not DIMM 1, so I moved the memory back over to DIMM 3 and now the problem is gone. Just something for those of you with the same problem to try.
    Oh, and I forgot one more issue: I cannot set the DRAM speed in the BIOS manually. I have DDR333, it should be 167 MHz, so I set it to 4/5 and the system will not boot at all; I end up having to reset the CMOS. I called MSI about that and they told me to just have it set to SPD. I said I know that and I'm fine with leaving it that way, but it should let me set it manually if I wanted to, and I wondered if that had anything to do with the random "testing base and extended memory" problem.

    It was probably my thread...
    http://www.msi.com.tw/program/e_service/forum/viewindex.php?threadid=6204&boardid=10&styleid=1
    Have done some testing with both DDR333 and DDR400 and my results are in the thread.
    Right now I'm using my DDR400 as DDR333, manually set to 167MHz, using DIMM1, and haven't seen any problems. I'll try DIMM3 to see if I get any changes.
    And by the way, I tried setting my DDR400 manually to 200MHz and it wouldn't boot at all. After a lot of ugly language I finally got it back to the BIOS... after a reset of the CMOS.
    Seems like you should use SPD if you want to use the correct frequency for your RAM.
    Have you tested the new 1.3 BIOS?

  • Sorting images mislabeled in Report Attributes?

    I added some custom images in the Sorting section of the Report Attributes. When I ran the report the images displayed were opposite of what I expected (i.e. the descending image displayed for an ascending sort and vice versa.) Upon further investigation it appears that the labels in the Sorting section are indeed reversed. The descending image is displayed when the sort order is ascending and vice versa. Is this a bug?

    Erik,
    Thanks, this explains it. By specifying the request as part of your URL you run into the recently uncovered issue. The requests you're setting are REMFROMLIST and ADD2LIST. You probably either have links that include those requests or you have branches where you specify them. Either way, in order to get report sorting to work, you'll have to make sure that the request strings are not part of your URL. This is a work-around, and the upcoming HTML DB patch release will solve this issue.
    One way of avoiding this is to have computations on the previous pages that set an application-level or page-level item to the REMFROMLIST and ADD2LIST values, and then you can use those items for the conditions that are currently evaluating those strings.
    Hope this helps and sorry for the inconvenience,
    Marc

  • No permission to copy file (to NAS with AFP support) - extended attributes?

    For certain files that I copy from one of my drives to the NAS here (Synology 206j), ever since upgrading to Snow Leopard, Finder's behaviour has changed. It will simply refuse to copy the file, stating that "The operation can’t be completed because you don’t have permission to access some of the items."
    The source is an external USB drive in Mac OS Extended format, I do have full permissions on the files and in any case the drive is set to ignore ownership.
    The target is, like most NASes, in EXT3 format. It's sharing via its own AFP implementation.
    I'd have tried repairing permissions, but for whatever reason, it's greyed out on my external drives. Since then I've figured that it was probably not that anyway, which will become apparent:
    I tried copying the file via the terminal and it actually worked, but also gave an error: "cp: myfile: could not copy extended attributes to /Volumes/media backup/myfile: Operation not permitted"
    This suggests to me that, as expected, the target volume doesn't support extended attributes. But rather than popping up with the message that Leopard did (something along the lines of "your target volume doesn't support extended attributes so your file's metadata may be lost, do you wish to continue") it just aborts instead.
    I'm not sure if this is a Snow Leopard problem or a change that Synology need to catch up with. A workaround is to compress the file to ZIP first.
    Thought I'd mention it here before trying to report it as a bug somewhere. Is anyone else having similar issues, either with a NAS or some other storage? If so, please specify.

    Thanks guys, yeah I've been monitoring the netatalk mailing list myself: they're onto the same problem, have come to the same conclusions about it being Extended Attributes and have much better proof of such than I
    It looks like they're currently debating the best way to fix it (pretending to support EAs but sending them to a black hole being the quick and dirty way, but I suspect they'll go for implementation)
    It'll be a while for sure, but the Synology guys are very good with Mac support so I'm sure they'll apply the update fairly quickly once the netatalk crew get something public. FWIW, I did detect a certain sense of urgency from the netatalk discussions.
    Although it's essentially answered, I'll leave the question marked as unanswered in case Apple come up with a better one
