Catalog Orphaned Records

Hi All,
We are running SRM 5.5 with CCM 2.0 and have the following issue.
When we run the program /CCM/CLEANUP_MAPPING with no commit, it finds many orphaned records in our Master catalog and Procurement Catalog.
Although these errors do not appear to be causing any problems for our user community, I would nevertheless prefer to remove them.
When we run the program outside of normal working hours with commit on to try to remove the errors, we get the following error message:
Exception occured
^ Error during catalog update 4400000047; catalog is locked by CATALOG_ENRICHMENT/491035F93F6E67C7E10000000A003214
Does anyone have any ideas on how to resolve the orphaned-records problem, or on what to do to remove the lock?
Thanks in advance
Allen Brooks
SRM BPO
Sunderland City Council

Hi Allen,
I would rather advise you to open a new message with SAP to solve this issue!
As you can see at the top of the report /CCM/CLEANUP_MAPPING:
Internal Program!!!
To Be Executed by SAP Support Team Only
This is because you can damage your whole catalogs with this report!
However, the problem seems to be that you uploaded a new catalog while the enrichment was still running, or the enrichment got stuck. You can check this in the table /CCM/D_CTLG_REQ: if you see an entry in the column REQ_SENDER, then that catalog is locked, and it has to be cleaned up by Support!
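If you just want to see whether such a lock entry exists before opening the message, a minimal read-only ABAP sketch along these lines should be enough (the table and the REQ_SENDER column are the ones named above; the report name is made up, so treat this only as an illustration):

REPORT z_check_ccm_catalog_locks.

* List catalog requests in /CCM/D_CTLG_REQ that still carry a sender entry,
* i.e. catalogs locked by a running or stuck enrichment/update.
* Nothing is changed or deleted here.
DATA: lt_req TYPE STANDARD TABLE OF /ccm/d_ctlg_req,
      ls_req LIKE LINE OF lt_req.

SELECT * FROM /ccm/d_ctlg_req INTO TABLE lt_req
  WHERE req_sender <> space.

IF lt_req IS INITIAL.
  WRITE: / 'No locked catalog requests found.'.
ELSE.
  LOOP AT lt_req INTO ls_req.
    WRITE: / 'Locked request, sender:', ls_req-req_sender.
  ENDLOOP.
ENDIF.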
Regards,
Tamás
SAP

Similar Messages

  • How to get orphan records in Wf_notifications table and how to purge records in WF notifications out table

    Hi Team,
    1) I need a query to get the orphan records in the WF_NOTIFICATIONS table so that I can purge them.
    2) In the same way, I need to purge unwanted records in the WF_NOTIFICATION_OUT table. Can you please let me know the process to find unwanted records in the WF_NOTIFICATION_OUT table and the process to purge them?
    Thanks in Advance!
    Thanks,
    Rakeesh

    Hi Rakeesh,
    Please review notes:
    11i-12 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance [Video] (Doc ID 1369938.1)
    bde_wf_data.sql - Query Workflow Runtime Data That Is Eligible For Purging (Doc ID 165316.1)
    What Tables Does the Workflow Purge Obsolete Data Program (FNDWFPR) Touch? (Doc ID 559996.1)
    Workflow Purge Data Collection Script (Doc ID 750497.1)
    FAQ on Purging Oracle Workflow Data (Doc ID 277124.1)
    Thanks &
    Best Regards,

  • Deleting orphan records.

    I happen to be in a situation where I have orphaned records whenever I do a dimension build. I have tried using a load rule with the Remove Unspecified option, but all the children of the members I want to keep get deleted. As an example:
    Member1
    - Child 1 of member 1
      - Child 1 of child 1
    - Child 2 of member 1
    Member2
    Orphan1
    Orphan2
    If I create a load rule with a data file containing Member1 and Member2 and use the Remove Unspecified option, Orphan1 and Orphan2 get deleted as required, but all the children of Member1 also get deleted. The question is: is there any way I can delete the orphaned records without deleting the children of the members I want to keep? I'm an Essbase newbie, so any help will be appreciated.

    In order to use the "Remove Unspecified" option, you need to be sure you have specified all the records you want to keep.
    Can you somehow generate a file with all the children you need to keep as well?
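    For instance, a rough sketch of what such a file could look like in parent/child format (member names taken from your example; the exact columns and delimiters depend on how your load rule is defined, so treat this only as an illustration):
    "Dimension","Member1"
    "Member1","Child 1 of member 1"
    "Child 1 of member 1","Child 1 of child 1"
    "Member1","Child 2 of member 1"
    "Dimension","Member2"
    With every member you want to keep listed like this, Remove Unspecified should only drop Orphan1 and Orphan2.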
    Robert

  • Orphaned records in VETVG

    Hi Guys/Experts,
    We have some issues in our environment with orphaned documents in VETVG. When a purchase order is viewed in ME23 it cannot be found, but when processed using VL04 it shows up. I assume they are all garbage.
    I have seen OSS note 61148, which talks about a program that can clean up VETVG. The problem with the program is that it checks EKPV right away. This is the part of the program that checks EKPV:
    CHECK i_ebeln NE space.
    SELECT * FROM ekpv APPENDING TABLE xekpv WHERE ebeln EQ i_ebeln.
    CHECK sy-subrc EQ 0.
    SORT xekpv BY ebeln ebelp.
    Since the document cannot be found in EKPV, the program never reaches the statement that deletes the records from VETVG. Here's the part of the program that deletes the actual record from VETVG:
    WRITE: / 'old records of table VETVG'.
    ULINE.
    SELECT * FROM vetvg WHERE vbeln EQ i_ebeln.
      WRITE: / vetvg-vstel,
             vetvg-ledat,
             vetvg-lprio,
             vetvg-route,
             vetvg-spdnr,
             vetvg-wadat,
             vetvg-kunwe,
             vetvg-vbeln,
             vetvg-vkorg,
             vetvg-vtweg,
             vetvg-spart,
             vetvg-auart,
             vetvg-reswk.
      IF db_upd NE space.
        DELETE vetvg.
      ENDIF.
    ENDSELECT.
    Please let me know if you've encountered the same issue and how you got rid of the orphaned VETVG records.
    I am looking forward to seeing your responses.  Thanks and have a good day!
    - Carlo
    Edited by: Carlo Carandang on Sep 7, 2010 8:41 PM

    There are many notes regarding VETVG in OSS. Before you execute reports to clean up the situation, you should try to find the root cause and an OSS note that fixes it.
    It sounds really strange to me to have records in VETVG that do not have a corresponding record in EKPV.
    EKPV is the shipping data of a purchase order; without shipping data there is usually no shipping.
    So the situation must be that the purchase order was initially a stock transfer and then somehow got changed so that it is no longer a stock transfer (e.g. by changing the document type or other fields), which may have removed the entry from EKPV, while due to program errors the entry in VETVG was not removed.
    I would start by checking a purchase order that is referenced in VETVG: look up the change history at header and item level, check whether it is a PO for a stock transfer or something different, and check whether the PO item has a shipping tab.
    Then I would check OSS notes.
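    If you first want to see how many such entries you actually have, a small read-only ABAP sketch like the following could help (the VETVG/EKPV field names are the ones used in the note's program above; the report name is made up, and nothing is deleted):
    REPORT z_list_orphaned_vetvg.

    * List VETVG entries whose purchase order has no shipping data in EKPV -
    * the same condition that makes the cleanup program from note 61148 skip
    * them. Display only, no database changes.
    DATA: lt_vetvg TYPE STANDARD TABLE OF vetvg,
          ls_vetvg LIKE LINE OF lt_vetvg,
          lv_ebeln TYPE ekpv-ebeln.

    SELECT * FROM vetvg INTO TABLE lt_vetvg.

    LOOP AT lt_vetvg INTO ls_vetvg.
      SELECT SINGLE ebeln FROM ekpv INTO lv_ebeln
        WHERE ebeln = ls_vetvg-vbeln.
      IF sy-subrc <> 0.
        WRITE: / 'No EKPV entry for PO', ls_vetvg-vbeln,
                 'shipping point', ls_vetvg-vstel.
      ENDIF.
    ENDLOOP.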

  • Orphan record owb 10gr2

    Dear all, I urgently need to solve a problem in OWB version 10gR2. I use this version because the OS is a zSeries mainframe, so only version 10gR2 is available.
    My problem is as follows:
    1 - I am using the cube object to look up the surrogate keys (SKs) in the dimension tables and attach them to the incoming records.
    2 - For keys that exist in the dimension, the lookup works perfectly. My problem is with keys that do not exist in the dimension: when I load the cube object and it does not find the dimension record, the record is not loaded into the cube.
    Reading about OWB 11g, I saw that it has handling for orphan records. I would like to know how to do this in version 10gR2; I will be immensely grateful to everyone.
    A hug, and I look forward to your help.
    Grimaldo Oliveira

    What version specifically are you using?

  • Inconsistency between catalog authoring tool & Search engine

    Hi,
    We are on CCM2.0 and SRM5.0 with catalog deployment in the same client as SRM.
    In our implementation we are using a master catalog and a product catalog. We have very recently deleted around 65 items from our product catalog, which is mapped to the master catalog, and hence those 65 items are no longer found in the master catalog in CAT (Catalog Authoring Tool).
    When this master catalog is published, we expect these 65 products to no longer be available to end users.
    But end users can still see the deleted items through CSE (Catalog Search Engine). Thus there is a data inconsistency between CAT & CSE.
    Has anyone faced a similar issue before? If yes, can you please share your experience with us?
    Edited by: Manoj Surve on Mar 24, 2009 7:05 AM

    Run program /CCM/CLEANUP_MAPPING in transaction SE38 to look for errors/orphaned records in test mode.
    If you want to clear out the orphaned records, uncheck the Test Mode flag, but note:
    Warning: this program can slow down a production system, as it clears out the memory buffers for user catalog searches and these have to be built up again by the users doing searches. This can take hours to days, depending upon the size of your system and the number of users.
    In Dev or UAT this is no problem.
    Hope this helps?
    Regards
    Allen

  • How to update child record when item from parent record changes

    Hi, I have a master-detail form with two regions on the same page.
    Region one references the parent record; one of the columns in the parent record is also in the child record, and there is a one-to-many relation between the parent record and the child records. How can I have the column on the child records updated when the column of the parent record is modified?
    For example: the parent record has two columns, ID and Program.
    The child record has Program, goal# and status. I have two pages established, page 27 and page 28.
    Page 27 lists all programs from the parent record; by clicking Edit on a program, it branches to page 28 with the program listed as an editable field in region one, and multiple child records with the same program listed in region two.
    The problem I am having is that once the program in region one is modified using ApplyMRU, the program in the child records does not get updated with the new value, and therefore those records become orphan records. I need a way to update all current child records with the new program value.
    Thanks in advance to anyone who can help out with this problem.
    Julie
    Edited by: JulieHP on May 24, 2012 4:57 PM

    One idea:
    If possible, create an after-update database trigger on the parent table to update the relevant child records.
    Another one:
    Create a PL/SQL process on the parent page with its process sequence right after ApplyMRU, so that when the ApplyMRU is successful it goes on to your process, where you can have your update statement.

  • Record locks created when user closes the Browser and not the Web Form

    Hi. We sometimes encounter the issue where a user updates a record, locking the record on the table, but then unexpectedly closes the browser without saving by clicking the X in the upper-right of the browser window. Then, when another user comes along and attempts to edit that record, they get the message "Unable to Reserve Record". The orphaned record lock does eventually seem to clear itself out, but that can often take 15-20 minutes.
    Is there any way to speed this up? Or to programmatically keep this from occurring? Either on the database side or with some code in a particular application?
    Please let me know your thoughts. Thanks in advance.

    If a user closes the browser window, the Forms runtime on the application server holding the locks is in most cases still up and running. FORMS_TIMEOUT controls how long a Forms runtime on the server stays up without the client applet sending a heartbeat (see MOS note 549735.1). By default this is 15 minutes, which would explain your locks being held for 15 minutes.
    You could decrease FORMS_TIMEOUT in default.env so the Forms runtimes get cleaned up earlier and thus the locks get released earlier.
    Note that if you have blocking client_host calls with WebUtil this might be a problem, as they prevent the Forms applet from sending the heartbeat, and once FORMS_TIMEOUT has passed while the applet is blocked, the Forms runtime on the server gets closed.
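    For example, a minimal sketch of the relevant entry in default.env (the value is in minutes; 5 is only an illustration - pick something that still leaves room for your longest blocking calls):
    # default.env - Forms environment file
    # Minutes a Forms runtime stays alive without a heartbeat from the applet.
    # Default is 15; a lower value releases orphaned locks sooner.
    FORMS_TIMEOUT=5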
    cheers

  • Orphans

    Hi,
    A null foreign key may result in orphaned records, as a null has no parent.
    Why is this allowed?
    What conditions require this?
    Regards.
    Please remember to mark the replies as answers if they help and unmark them if they provide no help , or you may vote-up a helpful post

    There are scenarios which require this kind of setup.
    1. Consider the case of a self-referencing relationship, as with an employee and his manager. Typically this is represented by a single table with fields EmpID and MgrID, where MgrID points to EmpID of the same table via a foreign key. In this case you may choose to make the foreign key column MgrID nullable to represent the root-level employee (the CEO), who has no manager to report to. If you wanted to enforce NOT NULL here, you would have to define a value representing the absence of a manager, like -1 or 0, which would in turn require a record with that same ID to be present in the table just to denote the "no manager" condition.
    2. Consider the case where an entity may belong to more than one group, for example a school library. Here we have a Member table indicating the members of the library. A member can be a student, a teacher or even non-teaching staff within the school. So the table has separate columns for StudentID, StaffID and TeacherID (or, if you want, you can collapse teacher and other staff info into one main table), and each column is linked to its master table by means of a foreign key constraint (so StudentID -> Students table ID field, StaffID -> Staff table ID, etc.). Any one record will have a value for only one of these fields, i.e. the member is either a student, a teacher or non-teaching staff. This is a case where you make the columns nullable.
    So in scenarios like the above, you will always make the FK column nullable.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Deleting invalid records

    Our company has been using the HFM application widely for 9 years, and we have never done this activity in our live environment. I understand it deletes invalid or orphan records left in the application once metadata has changed.
    My questions are: does it impact the actual data in any way?
    And would performing this activity, maybe quarterly, also improve performance?
    Let me know your thoughts, please.
    Thanks,
    Ankush

    The program deletes invalid records, meaning records for members that are no longer in the hierarchy (outline), so it won't cause any issues with data associated with existing outline members.
    Thanks
    Amith

  • Need to decommission a Windows 2003 server....

    I have a Windows 2003 DC with all the FSMO roles. It was the first DC of the domain.
    I also have 3 other DCs that are Windows 2008 R2.  
    All of the DCs are global catalogs.  
    DHCP Server is running on the Windows 2003 DC.  
    All of the DCs run DNS Server but a majority of the PCs in the network point to the Windows 2003 DC for DNS resolution.
    1) What do I need to do to get rid of the Windows 2003 DC cleanly and efficiently?  Is there a certain order of steps? 
    2) How should I split the FSMO roles between the remaining 3 Windows 2008 R2 DCs?
    3) I want to split the DHCP between the 3 DCs.  Should I copy the DHCP database from the Windows 2003 DC and import it?  Or should I recreate 3 non-overlapping scopes?

    To migrate the FSMO roles, have a look at this guide, which explains all the steps:
    http://support.microsoft.com/kb/324801. Once everything has been moved you can demote the server so it's no longer a DC, though personally I'd opt for shutting it down where possible initially, just so it's still there if you find something hasn't been moved. Once you're happy everything is still working without it being there, make sure you demote it, otherwise you'll end up with old records hanging around. If you want to make absolutely sure, you could have a look at
    http://msmvps.com/blogs/acefekay/archive/2010/10/05/complete-step-by-step-to-remove-an-orphaned-domain-controller.aspx, which details the steps required if a server isn't cleanly removed and you end up with orphaned records.
    Not sure about splitting the FSMO roles, to be honest. I suspect realistically you're best off keeping them all on one server, since it will make management a lot easier (e.g. you know which one server is more important than the others), and of course if you split them then, rather than having a 1-in-3 chance of having to recover that info if one of the servers died, you'd be guaranteed problems regardless of which server died.
    For DHCP there's a guide here on how to move the DHCP database to another server:
    http://blogs.technet.com/b/networking/archive/2008/06/27/steps-to-move-a-dhcp-database-from-a-windows-server-2003-or-2008-to-another-windows-server-2008-machine.aspx. In terms of multiple servers, unless you want to go for a full-on DHCP failover setup, splitting the scope is the best option, since you can't have multiple DHCP servers actively giving out the same IPs. I haven't tried it, to be honest, but since the scope is only part of the DHCP settings, I'd suggest you should be able to use the above process, import those settings to each DHCP server, and then, once imported, edit the DHCP scope on each so that they no longer overlap. That way you ensure that all the other settings remain the same and are completely identical.

  • Lightroom 3 - restore a deleted folder from HD?

    Hello!
    When I first began using LR I don't think I understood the navigation correctly, because for one project, back in June, I created a folder and moved favorites (flagged and 5-starred) from other folders into that one. What I didn't realize until 5 minutes ago is that it actually moves those files on your HD as well! The folder in LR has long since been deleted and for a heart-stopping 15 minutes or so I thought all my favorite photos were gone as well.
    What I found, however, was the original project folder with all my favorites on my HD. So that's okay - problem is, now when I try to bring the folders back into LR, it treats them all as new, untreated photos. It doesn't recognize them. It doesn't recognize any adjustments I've made, either. Even in Finder when I open the photos, they're all totally untreated.
    My question: is there a way I can get LR to recognize those photos again? Or am I going to have to go one by one and re-do all my adjustments, keywords, and folder placement?
    Thanks!
    p.s. - I don't use LR's backup because I have an online backup service, but thanks to Murphy's Law, that's also on the fritz...

    mmrempen wrote:
    Thing is, I don't see how deleting the folder in LR (but not on the HD) would totally erase all adjustment information. Doesn't LR store that info somewhere else? Or inside the photo directly? Where did that info go?
    That is the logic: LR is a database (= catalog) with records about your images.
    You can tell LR to delete a folder from its catalog, but not from the HD. This is tantamount to telling LR to throw away all information it ever had about these images.
    As you speak of "project folders" you might err by thinking LR should work like e.g. DxO, but it does not.
    LR is a metadata-database, nothing else.
    There is a remote chance that you might get things back, if you had saved LR's catalog settings into the XMP of the files (sidecar files for proprietary raw formats, embedded for TIF, DNG, JPG). Did you press Ctrl-S, or have your preferences set to auto-write XMP?
    I am afraid not, since you report that a new import does not reveal your former work - which it would, as import reads xmp if available.
    So tough luck, I'm afraid. If you tell LR to delete from its database, it does so.
    Cornelia

  • Supplier sites open interface question

    Hi all,
    Here is what we have done:
    We have a data migration program that looks at a csv file and then brings the data into staging table.
    Then, it inserts the above data into 3 open interface tables (AP_SUPPLIERS_INT, AP_SUPPLIER_SITES_INT, AP_SUP_SITE_CONTACT_INT)
    We are using the vendor_interface_id to link the data.
    For ex.
    Supplier 'ABC' in AP_SUPPLIERS_INT would have vendor_interface_id => 2345
    Supplier Sites that belong to above supplier will have:
    vendor_interface_id => 2345 vendor_site_code => 'ABC-SITE-1'
    vendor_interface_id => 2345 vendor_site_code => 'ABC-SITE-2'
    When we ran the request set [Supplier Open Interface Request Set(1)], the program considered all the records and imported the supplier record and 'ABC-SITE-1'.
    It rejected 'ABC-SITE-2' because of a setup issue.
    Now that we have fixed the setup issue, we ran the request set again; it rejected the supplier record saying "Vendor already exists" => no problem with this.
    But it doesn't attempt to pick up the second site 'ABC-SITE-2', which is now good to be picked up; since its vendor_id never gets updated, it stays as an orphan record.
    Is there any way to work this around (preferably from the application)?
    Thanks
    Vinod

    Hi user567944,
    While submitting the Open Interface import, the Import Options parameter should be All or Rejected. Please check this.
    Regards, Prasanth

  • "No such file or directory" errors on Time Machine backup volume

    I remotely mounted the Time Machine backup volume onto another Mac and was looking around it in a Terminal window and discovered what appeared to be a funny problem. If I "cd" into some folders (but not all) and do a "ls -la" command, I get a lot of "No such file or directory" errors for all the subfolders, but all the files look fine. Yet if I go log onto the Mac that has the backup volume mounted as a local volume, these errors never appear for the exact same location. Even more weird is that if I do "ls -a" everything appears normal on both systems (no error messages anyway).
    It appears to be the case that the folders that have the problem are folders that Time Machine has created as "hard links" to other folders which is something that Time Machine does and is only available as of Mac OS X 10.5 and is the secret behind how it avoids using up disk space for files that are the same in the different backups.
    I moved the Time Machine disk to the second Mac and mounted the volume locally onto it (the second Mac that was showing the problems), so that now the volume is local to it instead of a remote mounted volume via AFP and the problem goes away, and if I do a remote mount on the first Mac of the Time Machine volume the problem appears on the first Mac system that was OK - so basically by switching the volume the problem switches also on who shows the "no such file or directory" errors.
    Because of the way the problem occurs, ie only when the volume is remote mounted, it would appear to be an issue with AFP mounted volumes that contain these "hard links" to folders.
    There is utility program written by Amit Singh, the fella who wrote the "Mac OS X Internals" book, called hfsdebug (you can get it from his website if you want - just do a web search for "Mac OS X Internals hfsdebug" if you want to find it ). If you use it to get a little bit more info on what's going on, it shows a lot of details about one of the problematic folders. Here is what things look like on the first Mac (mac1) with the Time Machine locally mounted:
    mac1:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac1:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac1:xxx me$ ls -lai
    total 48
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 .
    282780 drwxr-xr-x 12 me staff 442 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 Documents
    729750 drwx------ 104 me staff 7378 Jan 15 14:17 D2
    728506 drwx------ 19 me staff 850 Jan 14 09:19 D3
    mac1:xxx me$ hfsdebug Documents/ | head
    <Catalog B-Tree node = 12589 (sector 0x18837)>
    path = MyBackups:/yyy/xxx/Documents
    # Catalog File Record
    type = file (alias, directory hard link)
    indirect folder = MyBackups:/.HFS+ Private Directory Data%000d/dir_135
    file ID = 728505
    flags = 0000000000100010
    . File has a thread record in the catalog.
    . File has hardlink chain.
    reserved1 = 0 (first link ID)
    mac1:xxx me$ cd Documents
    mac1:xxx me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac1:Documents me$ ls -lai | head
    total 17720
    135 drwxrwxrwx 91 me staff 3944 Jan 7 02:53 .
    280678 drwxr-xr-x 5 me staff 204 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    146 drwxr-xr-x 2 me staff 68 Feb 17 2009 .parallels-vm-directory
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    148 drwxr-xr-x 2 me staff 136 Aug 28 2009 ACPI
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    So you can see from the first few lines of the "ls -a" command, it shows some file/folders but you can't tell which yet. The next "ls -la" command shows which names are files and folders - that there are some folders (like ACPI) and some files (like A.txt and A9.txt) and all looks normal. And the "hfsdebug" info shows some details of what is really happening in the "Documents" folder, but more about that in a bit.
    And here are what a "ls -a" and "ls -al" look like for the same locations on the second Mac (mac2) where the Time Machine volume is remote mounted:
    mac2:xxx me$ pwd
    /Volumes/MyBackups/yyy/xxx
    mac2:xxx me$ ls -a
    . .DS_Store D2
    .. Documents D3
    mac2:xxx me$ ls -lai
    total 56
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 .
    282780 drwxr-xr-x 13 me staff 398 Jan 17 14:03 ..
    286678 -rw-r--r--@ 1 me staff 21508 Jan 19 10:43 .DS_Store
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 Documents
    729750 drwx------ 217 me staff 7334 Jan 15 14:17 D2
    728506 drwx------ 25 me staff 806 Jan 14 09:19 D3
    mac2:xxx me$ cd Documents
    mac2:Documents me$ ls -a | head
    .DS_Store
    .localized
    .parallels-vm-directory
    .promptCache
    ACPI
    ActivityMonitor2010-12-1710p32.txt
    ActivityMonitor2010-12-179pxx.txt
    mac2:Documents me$ ls -lai | head
    ls: .parallels-vm-directory: No such file or directory
    ls: ACPI: No such file or directory
    ... many more "ls: ddd: No such file or directory" error messages appear - there is a one-to-one
    correspondence between the "ddd" folders and the "no such file or directory" error messages
    total 17912
    728505 drwxrwxrwx 116 me staff 3900 Jan 7 02:53 .
    280678 drwxr-xr-x 6 me staff 264 Jan 20 01:23 ..
    144 -rw-------@ 1 me staff 39940 Jan 15 14:27 .DS_Store
    145 -rw-r--r-- 1 me staff 0 Oct 20 2008 .localized
    147 -rwxr-xr-x 1 me staff 8 Mar 20 2010 .promptCache
    151 -rw-r--r-- 1 me staff 6893 Dec 17 10:36 A.txt
    152 -rw-r--r--@ 1 me staff 7717 Dec 17 10:54 A9.txt
    If you look very closely, a hint as to what is going on is obvious - the inode for the Documents folder is 135 in the local mounted case (the first set of output above, for mac1), and it's 728505 in the remote mounted case for mac2. So it appears that these "hard links" to folders have an extra level of folder that is hidden from you and that AFP fails to take into account, which is what the "hfsdebug" output shows even better: you can clearly see the REAL location of the Documents folder is in something called "/.HFS+ Private Directory Data%000d/dir_135" that is not even visible to the shell. And if you look closely in the remote mac2 case, when I do the "cd Documents" I don't go into inode 135, but into inode 728505 (look closely at the "." entry for the "ls -lai" commands on both mac1 and mac2), which is the REAL problem, but I have no idea how to get AFP to follow the extra level of indirection.
    Anyone have any ideas how to fix this so that "ls -l" commands don't generate these "no such file or folder" messages?
    I am guessing that the issue is really something to do with AFP (Apple File Protocol) mounted remote volumes. The TimeMachine example is something that I used as an example that anyone could verify the problem. The real problem for me has nothing to do with Time Machine, but has to do with some hard links to folders that I created on another file system totally separate from the Time Machine volume. They exhibit the same problem as these Time Machine created folders, so am pretty sure the problem has nothing to do with how I created hard links to folders which is not doable normally without writing a super simple little 10 line program using the link() system call - do a "man 2 link" if you are curious how it works.
    I'm well aware of the issues and the conditions when they can and can't be used and the potential hazards. I have an issue in which they are the best way to solve a problem. And after the problem was solved, is when I noticed this issue that appears to be a by-product of using them.
    Do not try these hard links to folders on your own without knowing what they're for and when to use them (and when not to). They can cause real problems if not used correctly. So if you decide to try them out and you lose some files or your entire drive, don't say I didn't warn you first.
    Thanks ...
    -Bob

    The problem is Mac to Mac - the volume that I'm having the issue with is not related in any way to Time Machine or to Time Capsule. The reference to Time Machine is just to illustrate that the problem exists outside of my own personal work with hard links to folders on HFS Extended volumes (case-sensitive in this particular case, in case that matters).
    I'm not too excited about the idea of snooping AFP protocol to discover anything that might be learned there.
    The most significant clue that I've seen so far has to do with the inode numbers for the two folders shown in the Terminal window snippets in the original post. The local mounted case uses the inode=728505 of the problematic folder which is in turn linked to the hidden original inode of 135 via the super-secret /.HFS+... folder that you can't see unless using something like the "hfsdebug" program I mentioned.
    The remote mounted case uses the inode=728505 but does not make the additional jump to the inode=135 which is where lower level folders appear to be physically stored.
    Hence the behavior that is seen - the local mounted case is happy and shows what would be expected and the remote mounted case shows only files contained in the problem folder but not lower-level folders or their contents.
    From my little knowledge of how these inode entries really work, I think that they are some sort of linked list chain of values, so that you have to follow the entire chain to get at what you're looking for. If the chain is broken somewhere along the line or not followed correctly, things like this can happen. I think this is a case of things not being followed correctly, as if it were a broken chain problem then the local mounted case would have problems also.
    But the information for this link in the chain is there (from 728505 to the magic-135) but for some reason AFP doesn't make this extra jump.
    Yesterday I heard back from Apple tech support and they have confirmed this problem; they say that it is an "implementation limitation" of the AFP client. I think it's a bug, but that will have to be up to Apple to decide now that it's been reported. I just finished reporting this as a bug via the Apple Bug Reporter web site -- it's bug ID 8926401 if you want to keep track of it.
    Thanks for the insights...
    -Bob

  • Errors, Errors, and More Errors

    I am having massive problems with our Sql Server (v8 sp4)/Websphere (5) application.
    We rolled out a new version of our (previous stable) J2EE application at the beginning of December 2005. Our app uses container-managed transactions, no entity beans.
    We immediately began experiencing blocking processes. KILLing the blocking process almost always resulted in a few widowed and orphaned records.
    I tried tuning the connection/transaction/context. Previously, the app created new connections for almost every function call to every bean. I changed the configuration to share the same connection for all the calls. The problem reduced slightly.
    I have tried reducing the number of queries in each transaction. No help, either.
    Today, I checked the logs and we are getting a crazy collection of database errors:
    - "Statement is closed"
    - "ResultSet is closed"
    - "Invalid parameter binding(s)" (lots of these!)
    - "No valid transaction context present"
    - "Transaction is ended due to timeout"
    - "Transaction has already failed, aborting operation."
    As well as Java exceptions like:
    - "NullPointerException"
    - "SQLException: DSRA9002E ResourceException with error code null"
    This makes no sense whatsoever. We didn't make massive changes to the code. Our changes focused on some new beans, new JSPs, etc.
    Any ideas?
    Russell T
    Atlanta, GA (USA)

    I apologize for explaining badly the connection setup. We are using connection pooling (through Websphere).
    What I intended to say was that we were acquiring connections from the Context willy-nilly throughout the code. I created an abstract bean that every session bean extends in order to eliminate redundant code.
    The abstract session bean gets its connection like this:
    Context envCtx = (Context) new InitialContext();
    String dataSourceName = (String) envCtx.lookup("java:comp/env/dataSourceName");
    DataSource dataSource = (DataSource) envCtx.lookup("jdbc/" + dataSourceName);
    connection = dataSource.getConnection();
    The WebSphere admin has even tried additional tuning of the connection pool settings, such as size, timeout, reap time, etc.
