Duplicate Shared Items in Sidebar

Hi:
Weird one.  Suddenly all the shared items in my sidebar are doubled.  I have Capsule and Capsule (2), NashServer and NashServer (2), etc.  Deleting the Finder prefs doesn't fix it.  Flushing caches does nothing.  That is about the extent of my skill set.  Anybody have any further ideas?

Solved this.  It wasn't an Apple problem.  Please disregard.

Similar Messages

  • Can't get Shared items to appear in Sidebar

    Up until yesterday I have been screen sharing two computers on my network. I would go to the "SHARED" category in the Finder Sidebar and select the other computer. Now I can no longer see the "SHARED" category.
    I have gone to the Sharing item in System Preferences and reset the permissions, etc., in the File Sharing and Screen Sharing categories.
    Still no luck!

    You shouldn't need to reset permissions. If it is not showing, try turning off file/screen sharing, and then turning it back on. Permissions are not going to affect the share showing in your sidebar.

  • Finder Side Bar - "Shared" Items Keep Refreshing

    I have an iMac where the Finder side bar "Shared" items (other Macs on the LAN and a Time Capsule) keep refreshing. In other words, the list disappears for a split second and then reappears every 120 seconds or so. A Bonjour refresh, methinks. This happens on both Ethernet and WiFi.
    I have replaced the Time Capsule - at first thinking this was the issue. I have also removed the sidebar.plist file. Both with no success.
    Any thoughts? Search criteria turned up little.
    OS - 10.6.7

    I appreciate you taking the time to look. My 3 Macs are now displaying only other Leopard computers. The strange ones have disappeared. One strange listing did look like my AirPort Express base station. I have a second AirPort Express which serves to play my iTunes. I also have two PCs on a Linksys wireless network and a wireless printer on my AirPort network.
    Have no idea why the strange additions have disappeared. In an attempt to get them to disappear, I did shut down and restart each computer (Macs and PCs), unplugged and replugged both AirPort Expresses, turned the printer off and on, and unplugged and replugged the Linksys wireless router connected to my DSL modem. My goal was to isolate the equipment causing the strange listings. No luck. I could only modify the strange listing by totally taking down the wireless network at the DSL modem. Then, after returning from working at a client's, everything looked normal. ?????

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) and data (an 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such a small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
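    For what it's worth, a minimal sketch of that DB_SET_RANGE/DB_NEXT prefix scan, assuming the BDB C API (releases from 4.6 on spell the cursor method DBC->get; older ones use DBC->c_get) and the 4+8 byte MSB-first composite key described above; scan_prefix is a hypothetical helper name:
         #include <string.h>
         #include <db.h>

         /* Walk every entry whose 12-byte composite key starts with
          * the given 4-byte prefix. */
         int scan_prefix(DB *dbp, const unsigned char prefix[4])
         {
              DBC *dbc;
              DBT key, data;
              unsigned char kbuf[12] = {0};   /* 4-byte prefix + 8-byte suffix */
              int ret;

              memcpy(kbuf, prefix, 4);        /* all-zero suffix => smallest key */
              memset(&key, 0, sizeof(key));
              memset(&data, 0, sizeof(data));
              key.data = kbuf;
              key.size = sizeof(kbuf);

              if ((ret = dbp->cursor(dbp, NULL, &dbc, 0)) != 0)
                   return (ret);
              /* DB_SET_RANGE positions at the first key >= prefix||0...0;
               * DB_NEXT then walks forward until the prefix changes. */
              for (ret = dbc->get(dbc, &key, &data, DB_SET_RANGE);
                   ret == 0;
                   ret = dbc->get(dbc, &key, &data, DB_NEXT)) {
                   if (key.size < 4 || memcmp(key.data, prefix, 4) != 0)
                        break;               /* left the prefix range */
                   /* ... use the 8-byte value at (unsigned char *)key.data + 4 ... */
              }
              (void)dbc->close(dbc);
              return (ret == 0 || ret == DB_NOTFOUND ? 0 : ret);
         }
    Because the keys are MSB-first, plain memcmp order matches BDB's default lexical ordering, which is what makes the prefix check valid.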
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              /*
               * If we find an exact match, we're done. If in a sorted
               * duplicate set and the item is larger than our test item,
               * we're done. In the latter case, if permitting partial
               * matches, it's not a failure.
               */
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
              }
              /* ... remainder of the loop elided in the original post ... */
         }
    What's the expert opinion on this subject?
    Vincent

    Hi,

    > The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages! In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items. So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application. However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.

    In general, performance slowly decreases as the number of duplicates associated with a key grows. For the Btree access method, lookups and inserts have O(log n) complexity, so the search time depends on the number of keys stored in the underlying tree. When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Given that for each key there is (in most cases) a large number of associated data items (up to thousands or tens of thousands), a large number of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. Both settings should tend toward large values, so that the cache can accommodate large pages (each hosting hundreds of records). Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
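    A minimal configuration sketch along those lines, assuming an environment-based setup; the 256 MB cache and the 8192 byte page size are placeholder values that must be found by experiment, as the pages above describe:
         #include <db.h>

         /* Open an environment and a DB_DUPSORT btree with explicitly
          * sized cache and pages. */
         int open_tuned(const char *home, DB_ENV **envp, DB **dbpp)
         {
              DB_ENV *env;
              DB *dbp;
              int ret;

              if ((ret = db_env_create(&env, 0)) != 0)
                   return (ret);
              /* 256 MB cache in a single region; tune to the working set. */
              if ((ret = env->set_cachesize(env, 0, 256 * 1024 * 1024, 1)) != 0)
                   return (ret);
              if ((ret = env->open(env, home, DB_CREATE | DB_INIT_MPOOL, 0)) != 0)
                   return (ret);

              if ((ret = db_create(&dbp, env, 0)) != 0)
                   return (ret);
              if ((ret = dbp->set_pagesize(dbp, 8192)) != 0)   /* large pages */
                   return (ret);
              if ((ret = dbp->set_flags(dbp, DB_DUPSORT)) != 0)
                   return (ret);
              if ((ret = dbp->open(dbp, NULL, "dups.db", NULL,
                  DB_BTREE, DB_CREATE, 0)) != 0)
                   return (ret);

              *envp = env;
              *dbpp = dbp;
              return (0);
         }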
    > While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items. I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data item (in case zero-length is not an option).

    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it. Have you thought of using multiple threads to load the data?
    You can have records with a zero-length data portion.
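    For reference, a zero-length data portion is just a DBT with size 0; a minimal sketch, assuming the 12-byte composite key layout from the question (put_composite is a hypothetical helper):
         #include <string.h>
         #include <db.h>

         /* Store one composite key with an empty data item. */
         int put_composite(DB *dbp, const unsigned char kbuf[12])
         {
              DBT key, data;

              memset(&key, 0, sizeof(key));
              memset(&data, 0, sizeof(data));   /* data.data == NULL, data.size == 0 */
              key.data = (void *)kbuf;
              key.size = 12;
              return (dbp->put(dbp, NULL, &key, &data, 0));
         }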
    > Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages".

    This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, so a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    > Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):

    The Hash access method does, as you observed, perform a linear search within a duplicate set, so the search time is proportional to the number of duplicate items in the bucket; the O(1) cost of hashing to the bucket does not help there. Combined with the fact that you don't want duplicate data, the Hash access method may not improve performance.
    This is a performance/tuning problem, and investigating it involves a lot of resources on our part. If you have a support contract with Oracle, please don't hesitate to put up your issue on Metalink, or indicate that you want this issue to be taken private, and we will create an SR for you.
    Regards,
    Andrei

  • Duplicate Payment item

    Hi,
    When posting an electronic bank statement in FEBA, a duplicate payment item is generated for the same reference. Can anyone tell me the root cause?
    Regards
    MRS

    Check the input file, and also check the posting rules assigned to that external transaction in the EBS customization.
    Regards,
    SDNer

  • Photos tells me it is connecting to Library for shared photos

    Photos tells me it is connecting to Library for shared photos. It's been saying this for about ten hours. I can't stop it from searching or get it to do anything else. I can go back to my photos but I can't get Photos to find the shared items.

    Be sure Safari does not have the Block Pop-Up Windows preference set.
    Where I work now there are several unencrypted VLANs that require authentication, and Safari promptly pops up a window for me to register every time.

  • EBP PO : Unable to duplicate/copy  item,GR_NON_VAL issue

    Hello,
    I am using SRM 5.0.
    In Process PO, when I create a PO with more than one line item, the following issue comes up:
    When I enter one line item and check, it is OK. When I click on the Duplicate Selected Item or Copy push button and then check, the following error appears:
    Flag 'Automatic Settlement' at item level is different; Change not possible
    Flag 'Invoice Expected' at item level is different; Change not possible
    Thanks,
    Sachin

    Hello,
    I have debugged the whole program and found that with a single line item everything is fine. When I click on Duplicate Selected Item, the GR_NON_VAL indicator gets set on the first line item while the second item's indicator remains blank, whereas with a single line item the indicator stayed blank.
    When I copy the line item, it works OK.
    Due to the mismatch between items, the following program raises the error message:
    PERFORM downward_inheritance USING    p_hgp_ecom
                                          p_hgp_icom
                                          p_guid
                                          p_object_type
                                          p_itm_icom
                                          ls_igp_icom
                                          p_changed
                                 CHANGING ls_header.
    Any idea why the system is behaving like this?
    Thanks,
    Sachin

  • Is it possible to duplicate an item in the project panel with scripts?

    Hey all, I'm trying to make a script that will duplicate an item in the project panel. I know you can duplicate an item in a comp, but I'd like to duplicate a project item, something like:
         app.project.item(2).duplicate();
    Is it possible, with some other coding, to do that?
    Thanks

    Dave, I'm trying to duplicate in a script running inside AE. I guess I could try a system command to duplicate, but I'd really like to do it inside AE so I can keep track of the new layer.

  • New error message :duplicate line item

    Hi,
    We have to add a new error message so that when we try to add a duplicate line item in our qty contract:
    1.     we should not be allowed
    2.     we should get an error message "duplicate line item not allowed"
    How do we achieve this?
    Thanks
    Arun

    Search this forum for "USEREXIT_SAVE_DOCUMENT" and "USEREXIT_SAVE_PREPARE".

  • How to avoid duplicate BOM Item Numbers?

    Hello,
    is there a way to avoid duplicate BOM Item Numbers (STPO-POSNR) within one BOM?
    For Routings I could avoid duplicate Operation/Activity Numbers with transaction OP46 by setting T412-FLG_CHK = 'X' for the Task List Check. Is there an equivalent for BOMs?
    Regards,
    Helmut Gante


  • R1: tcAPIException: Duplicate schedule item for a task that does not allow multiples.

    Hi,
    I'm struggling with the following task:
    I have to ensure an account exists for a given resource. I provision it with tcUserOperationsIntf.provisionObject().
    I've created a createUser task to create the account.
    The task code checks if there is already matching account.
    If no account exists, it is created in the disabled state, and the object state of the OIM account is set to 'Disabled' by means of task return code mapping.
    If it exists, it is 'linked' to OIM account.
    The problem is that if the existing account is enabled, I have to change the OIM account state to 'Enabled' as well.
    To implement this (thanks, Kevin Pinski: https://forums.oracle.com/thread/2564011) I've created an additional task 'Switch Enable' which is triggered by a special task return code. This task always succeeds, and its only side effect is switching the object status to 'Enabled'.
    But I've been getting the 'Duplicate schedule item for a task that does not allow multiples' exception constantly.
    This is the stack trace:
    Thor.API.Exceptions.tcAPIException: Duplicate schedule item for a task that does not allow multiples.
      at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2925)
      at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2666)
      at Thor.API.Operations.tcUserOperationsIntfEJB.provisionObjectx(Unknown Source)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:601)
      at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
      at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
      at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
      ... skipped
      at Thor.API.Operations.tcUserOperationsIntfDelegate.provisionObject(Unknown Source)
      ... skipped
    What did I do wrong?
    Regards,
    Vladimir

    Hi Vladimir,
    Please select 'Allow Multiple Instance' checkbox for the process task.
    Thanks,
    Pallavi

  • Duplicate line item in Statement of account

    Dear All,
    Why is there a duplicate line item in the Statement of account?

    Hi Ajit,
    In your ECC system, follow the path below and set the Availability check rule to "B" (full delivery):
    SPRO --> SD --> Basic Functions --> Availability Check and TOR --> Availability Check --> Availability Check with ATP Logic or Against Planning --> Define Default Settings
    Select your sales area and set the Availability check rule to "B" (full delivery).
    This should fulfill your requirement.
    Rgds
    Sourabh

  • What is the functional difference between the /Shared Items and the /Users/Shared folders?

    Can someone shed some light on the difference, intended functionality between the /Shared Items and the /Users/Shared folders?

    do any/all network services use the Shared Items folder?
    Not all services use it. Time Machine Server does, and I'm not sure which others. FTP is not controlled by Server.app.

  • Duplicate calendar items

    All of a sudden (probably after this latest release), I am getting duplicate calendar items on my iPhone for certain events. From what I can tell, if I create an event in iCal and sync the iPhone, the event is put on the phone correctly. If I then edit the event in iCal and re-sync, it puts a duplicate event with the new information on the iPhone, instead of just updating the event. It's highly annoying and I don't remember it ever being like this. Anyone know how to fix this?

    I had this problem months ago on my iPod Touch (roughly when this dormant thread was started, before the 2.0 update, I think). With that update, it went away, but has recently resurfaced, perhaps with the 2.1 update or perhaps since then. I didn't notice the resurfacing of the problem until this morning, but that doesn't necessarily mean anything.
    The symptoms: When I create a calendar event in iCal, it syncs properly to my iPod. When I create a calendar event on the iPod, it does not sync to iCal. When I modify or delete an iCal-created event on my iPod, the event is not modified in iCal, and the original event is copied from iCal to the iPod, so I wind up with duplicate events (or the deleted event reappears).
    This morning, I overwrote all calendar data on my iPod, to no avail. When I first had the problem months ago, I reset the sync history and then reset the whole iPod, also to no avail; I haven't tried those more drastic steps recently.
    Is anyone else still/again having similar problems? Any solutions besides telling Apple and waiting for a fix?

  • Duplicate shared volume on Desktop

    Hello,
    I sometimes get duplicate shared volumes on my desktop. When I open
    /Volumes I can see theVolume, theVolume-1, theVolume-2, ...
    The shared volumes are mounted using AppleScript and unmounted after a copy is done.
    How can I prevent this?
    regards,
    Didier.

    Hi jbbcck, and a warm welcome to the forums!
    Try doing a Get Info on the mounted Volumes/Devices on one that does connect; any clues there?
    Have you checked the IPs of the drive & the Tiger Macs?
