Problem with Handling of Empty Files in the File Adapter

Hello everyone,
NetWeaver 2004s --- XI
On the sender side I have a File Adapter.
I have a problem with the handling of empty files: when an empty file is sent, I do not want an empty message to be created.
I have seen the following text in the help documentation, but I cannot find the corresponding parameter in the adapter configuration.
Can you give me some tips?
Thx in advance
best regards
Yaning
SAP Help Document about the File Adapter
+Handling of Empty Files
Specify how empty files (length 0 bytes) are to be handled.
○       Do Not Create Message
No XI messages are created from empty files.
The files are processed according to the selected Processing Mode.
For example, if the processing mode is Delete, empty files are deleted in the source directory.
○       Process Empty Files
XI messages are created with an empty main payload.
The files are processed according to the selected Processing Mode.
○       Skip Empty Files
No XI messages are created from empty files.
Empty files are skipped and remain in the source directory.+
Help Docu

hi,
it's available since SP19 for XI 3.0
and the corresponding SPS for XI 7.0
http://help.sap.com/saphelp_nw04/helpdata/en/44/f565854b7341e6e10000000a1553f6/frameset.htm
so probably you need to install the new SP
Regards,
michal
XI / PI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

Similar Messages

  • Problem about space management of archived log files

    Dear friends,
    I have a problem with space management of archived log files.
    My database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web based) to configure all the backup and recovery settings.
    I configured the "Flash Recovery Area" to do backup and recovery automatically. My daily backup schedule is every night at 2:00 am, and my backup setting is "disk settings" -- "compressed backup set". The following is the RMAN script:
    Daily Script:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    }
    The retention policy is the second choice, that is, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". The recovery window is 1 day.
    I assigned enough space for the flash recovery area: my database size is about 2 GB and I assigned 20 GB as the flash recovery area.
    Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically, that is, when the space is full it can delete the obsolete archived log files. But in fact it never works! Whenever the space is full, the database hangs. Besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no", and then from "no" to "yes". I really have no idea about this! Even though I know Oracle usually keeps archived files somewhat longer than the retention policy requires, I don't know why the obsolete status can change automatically. Although I could write a scheduled job to delete obsolete archived files every day, I just want to know the reason. My goal is to back up all the files on disk and let Oracle manage them automatically.
    There is also another problem about archive mode. I have two Oracle 10g databases (Release 1); the size of db1 is more than 20 GB and the size of db2 is about 2 GB. Both have the same backup and recovery policy, except that I assigned a larger flash recovery area to db1. Both are in archive mode, and almost nobody accesses either of them except for the scheduled backup job and my occasional administration through OEM. The strange thing is that the number of archived log files of the smaller database, db2, is much bigger than that of the bigger database; the same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I also found that the memory utilization of the smaller database is higher than that of the bigger database: nearly all the time the smaller database's memory utilization stays above 99%, while the bigger one's stays around 97%. (I enabled "Automatic Shared Memory Management" on both databases.) But both databases' CPU and queue loads are very low, and I'm nearly sure no one has hacked the databases. So I really have no idea why the same backup and recovery policy produces such different results, especially why the smaller database produces more redo logs than the bigger one. Does anyone happen to know the reason, or how I should go about finding it?
    By the way, I found the web-based OEM cannot reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because the flash recovery area is full, then after I assign more flash recovery area space and restart the database, OEM usually does not reflect the correct database status; I must restart OEM manually for it to show the current status correctly. Does anyone know in what situations I should restart OEM to reflect the correct database status?
    Sorry for the long message; I just wanted to describe the details to ease diagnosis.
    Any hint will be greatly appreciated!
    Sammy

    Thank you very much. In fact, my site's Oracle never manages the archived files automatically, no matter what I try. In the end, I set up a daily job to check the archived files and delete them.
    thanks again.

  • Problem in handling complex CSV file

    Hi,
    I am facing a strange problem in handling a complex CSV file.
    The content of the file is as follows. It is getting executed through an XSD and the target is a table.
    The first time the interface is executed (e.g. session id 19001) it completes successfully.
    Now when I make changes in the CSV file, suppose adding (for example
    DEPT,D05,Books,ABC Store
    CUST,C01,Sayantan,[email protected]
    CUST,C02,Shubham,[email protected])
    or deleting (for example
    CUST,C02,Sarbajit,[email protected])
    and then running the interface, I am not getting the added D05 in the target table, nor is my C02 data getting removed; i.e. the updated data in the CSV file is not being fetched and I am getting the same records as when I ran the interface in session id 19001.
    I do not understand why this is happening.
    The CSV file used in session id -19001 is:
    DEPT,D01,Retail,World Mart
    CUST,C01,Anindya,[email protected]
    CUST,C02,Rashni,[email protected]
    DEPT,D02,Food,Food Bazar
    CUST,C01,Abhijit,[email protected]
    CUST,C02,Anirban,[email protected]
    CUST,C03,Sharmistha,[email protected]
    DEPT,D03,Water,SW
    CUST,C01,Nirmalya,[email protected]
    DEPT,D04,Clothes,City style
    CUST,C02,Sarbajit,[email protected]
    CUST,C03,Abhishek,[email protected]

    Here's what you can do to handle CSV files using HSQL.
    Say the CSV file contains order data. Each order record contains data about a particular order, like order number, product code, quantity, ordered by, date, etc. Now let's take a hypothetical requirement: the user needs to know how many orders were placed per product.
    Option 1]  At the basic level, read the order file, parse each line and write the logic to get count of orders per product.
    Option 2] Load this CSV file into a database like MySQL and write database queries to get the orders per product.
    So where does this leave us? Have we run out of options? I am sure we have all tried and used the above two options, but I wanted a different approach. The following questions were lingering in my mind:
    1] Why can't I write SQL queries against the CSV file itself? After all, it's like any RDBMS table.
    2] Why should I load the CSV file into some database before I query it?
    3] Why can’t I create a database table & attach the CSV file to it?
    Continue Reading Here: Handling CSV Files
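    To make question 3] concrete, here is a minimal sketch of the idea using an HSQLDB text table, where the CSV file itself becomes the table's storage. The file name orders.csv, the column layout and the query are illustrative assumptions, not taken from the post above:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CsvAsTable {
        public static void main(String[] args) throws Exception {
            // In-process HSQLDB database; no separate server needed.
            Connection con = DriverManager.getConnection("jdbc:hsqldb:file:ordersdb", "SA", "");
            Statement st = con.createStatement();
            // Create a TEXT TABLE and attach the CSV file to it.
            st.execute("CREATE TEXT TABLE orders "
                     + "(order_no VARCHAR(20), product_code VARCHAR(20), quantity INT)");
            st.execute("SET TABLE orders SOURCE \"orders.csv\"");
            // Query the CSV file as if it were an ordinary table: orders per product.
            ResultSet rs = st.executeQuery(
                    "SELECT product_code, COUNT(*) FROM orders GROUP BY product_code");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getInt(2));
            }
            con.close();
        }
    }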

  • Problem about .xml file from PPro CS5 to FCP with RED and P2 file.

    I have created a project in Premiere Pro CS5.
    I import RED R3D or P2 MXF files and edit my media.
    From the File menu I select Export to "Final Cut Pro XML...".
    I open FCP and import the XML file, but the media are not reconnected:
    "Warning: Non-critical errors were found while processing an XML document.
    Would you like to see a log of these errors now?"
    I look at the log file and see:
    "<file>: Unable to attach specified media file to new clip."
    Where is the problem?

    Hi Dennis, I reformatted my Mac Pro 8-core and installed the Final Cut Studio suite and CS5 Premium (no CS4).
    I installed the Blackmagic Decklink 7.6.3 driver.
    If I open After Effects and set the preview card to the Blackmagic, I see the preview on the external monitor.
    If I open Premiere Pro and set up the preview, I don't see the Blackmagic card, only the second monitor, DV, etc.
    In Premiere I see the Blackmagic presets, but not the preview card.
    I have a second question.
    I want to edit RED files in Premiere Pro and color correct in Apple Color from FCP.
    My problem is: Color crashes when I send a file from FCP to Color (at random).
    The sequence is:
    I import the RED files into PPro5 -- edit -- export an XML file.
    Close PPro5.
    Open FCP, import the XML (the media do not re-link).
    Save the project in FCP.
    Select the sequence and send it to Color.
    At this moment Apple Color crashes.
    I shut down the Mac.
    I power up the Mac.
    Open FCP, select the project and send the sequence to Color.
    Color sees the project but no media.
    I re-link the media and edit my media in Color.
    Why does Apple Color crash?
    Sorry for my English
    Many Thanks
    Best regards
    Gianluca Cordioli
    Alchemy Studio'S di Gianluca Cordioli
    Via Pacinotti 24/B
    37135 VERONA
    cell.:+39 3385880683
    [email protected]
    www.alchemystudios.it

  • How to handle empty file using sftp adapter

    Hi,
    Please explain to me how to handle empty files in the SFTP adapter.
    Thanks,
    Enivass

    Hi Enivaas,
    I don't have the Seeburger SFTP adapter at hand at the moment, but as far as I remember, it does not have a specific empty-file handling option like the standard file/FTP adapter.
    So to stop empty files from being written, I guess you would need to handle this at the mapping level. For example, check the target creation criteria in the header node in the mapping; if the creation criteria are not met, you can throw an error in the mapping (see the sketch after this reply).
    You may also incorporate this condition in your Receiver determination. In this case, if the condition is not satisfied, no receiver is determined in PI.
    Regards
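    As an illustration of the mapping-level check suggested above, here is a minimal sketch of a user-defined function for the graphical mapping that aborts message creation when the source carries no items. The function name, the idea of passing an item count (for example from a count of the relevant source nodes), and the use of a RuntimeException to stop the mapping are assumptions for illustration, not part of the adapter or of the reply above:
    // Hypothetical simple UDF: itemCount is expected to hold the number of relevant source nodes.
    public String failIfEmpty(String itemCount, Container container) {
        // If the target creation criteria are not met, abort the mapping so no empty file is written.
        if (itemCount == null || itemCount.trim().length() == 0 || Integer.parseInt(itemCount.trim()) == 0) {
            throw new RuntimeException("Source payload is empty - no target message created");
        }
        return itemCount;
    }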

  • Load excel file value to graph problem(about x-axis value)

    Hello,everyone,I have this problem.
    When I load the Excel file which I saved before, the waveform comes out normally, but the x-axis of the graph does not show the numbers I want; it shows numbers based on the number of samples.
    What I want is to plot the x-axis values based on the rate (pts/s). In this case the rate is 50 pts/s, so after calculation it is 0.02 s per reading, 20 seconds in total. I tried to use an XY Graph, but that didn't solve it.
    Jerry
    LV 7.1
    Attachments:
    delete 0 array.vi ‏14 KB
    test3.txt ‏20 KB
    Create Waveform Graph1.vi ‏275 KB

    Is it always 0.02 s per reading, or did you get this figure from some calculation? Either way, you just connect that value (the constant 0.02 or the calculated amount) to a Property Node... you link the Property Node to Waveform Graph 2... under Property choose X-Scale -> Offset and Multiplier -> Multiplier...
    I'm not sure if this is the best method, but I hope it helps... I've tried it on your code and it works...
    Best Regards,
    JQ
    LV 8.0 user...

  • Problem about locating file with similar filename "Info_*.dat"

    Dear all,
    I have a big problem with file locating and reading; please help, and I would be grateful for a reply with sample code.
    How can I locate and read files with similar file names (Info_*.dat) one by one? For example, Info_AAA.dat, Info_BBB.dat, Info_CCC.dat. I want to read Info_AAA.dat first, then Info_BBB.dat, and lastly Info_CCC.dat, where AAA, BBB and CCC are random numbers. These files are stored in the same directory.
    How can I make use of *, as in Info*.dat? Is it possible to use * with Java? Please show me a sample if possible.
    Help... urgent!
    Thanks a lot.
    MRW.

    That's easy with PathPattern class from JRegex:
    import java.util.Enumeration;
    import java.io.File;
    import jregex.util.io.PathPattern;
      Enumeration files = new PathPattern("Info_*.dat").enumerateFiles();
      while (files.hasMoreElements()) {
         File f = (File) files.nextElement();
         doWhatever(f);   // process each matching file
      }
    ...There are also any-char "?" and any-directory "**" wildcards.
    See http://jregex.sourceforge.net/gstarted.html#filesystem
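    If you prefer to stay with the standard library instead of JRegex, a plain java.io FilenameFilter gives a similar result for this particular pattern; this is just an alternative sketch, with the directory path and the sort order chosen for illustration:
    import java.io.File;
    import java.io.FilenameFilter;
    import java.util.Arrays;

    public class InfoFileLister {
        public static void main(String[] args) {
            File dir = new File(".");  // directory holding the Info_*.dat files
            // Accept only names that look like Info_*.dat
            File[] files = dir.listFiles(new FilenameFilter() {
                public boolean accept(File d, String name) {
                    return name.startsWith("Info_") && name.endsWith(".dat");
                }
            });
            Arrays.sort(files);  // process Info_AAA.dat, then Info_BBB.dat, then Info_CCC.dat
            for (int i = 0; i < files.length; i++) {
                System.out.println("Reading " + files[i].getName());
                // read the file here
            }
        }
    }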

  • Processing Empty Files with File adapter

    Hi..
    We are working on SP17, but I couldn't find the option Handling of Empty Files in the sender File adapter. I checked the adapter metadata for the File adapter under the SWCV SAP BASIS 6.40 but was unable to find the word "empty".
    Please suggest....
    Regards
    Pravesh

    Hi
    You possibly have that control only from SP 19 in XI and SP 10 in PI.
    Refer to this thread:
    File Content Conversion Problem of not generating empty file

  • How do I empty all the files in the "Downloads" folder ?

    How do I empty all the files in the "Downloads" folder ? Can they all be deleted at once?

    PS: when you look into the menu bar commands by mouse, you may note
    there are keyboard shortcuts to perform the same tasks; such as they are.
    *Edit> Select all* = uses 'Command + A' keys to do this, without mouse/menu.
    And, it can be done quickly by keyboard. +(this will select all, for the next step)+
    You would have to select the folder to be edited; yet there are also other keyboard
    commands to select items, folders, applications, etc., all without a mouse.
    Once you choose items in a folder or other location, you can then use a
    keyboard command to Move them to the Trash; from there, Empty Trash
    with a three-key command; all without use of the mouse.
    The key combination for *Move to Trash* looks like it is 'Command + Delete'. Some
    keyboards have two marked delete keys, so you may have to try both. The
    +small delete key+ has an icon with a right-facing x in a box.
    The three keys to *Empty Trash* via keyboard instead of menu bar/mouse
    appear to be the combination of 'up-arrow, command, delete.'
    Understanding some of the iconography of symbols used to indicate function,
    or what keys to use, as indicated by icons instead of words, is not too hard.
    There are articles and web pages which show various keyboard shortcuts.
    Here's an example, from one of several I've bookmarked over a few years.
    This one has images and shows what the key shortcut symbols mean.
    • Mastering Keyboard Shortcuts: (from myfirstmac site)
    http://www.myfirstmac.com/index.php/mac/articles/mastering-keyboard-shortcuts
    • Mac OS X keyboard shortcuts:
    http://support.apple.com/kb/HT1343
    However, if you have anything you may want to keep before mass deleting, be
    sure to move it into another folder. +re: The docked folders in Leopard+ 10.5 have
    one .PDF in each, about each of the purposed folders. I haven't deleted mine yet.
    {And I've added another folder into the dock near the Trash for my own alias items
    and the original is on the hard disk drive (one user) or could be put into an account
    folder so it would not have permissions or privilege issues with a second account.}
    I'd thought about those 'keyboard symbols' after the first posting, but
    had to take an elder parent on several errands that took quite a while.
    PS: there probably are ways to use AppleScript and Automator to handle folder contents.
    Good luck & happy computing!

  • How to create empty text file on target system for empty source text file

    Hi All,
    I have an issue with handling an empty file in a Text (FCC) to Text (FCC) file scenario. The interface picks up a text file and delivers it to the target system as a text file. I have used FCC in both the sender and receiver CCs.
    The interface is working fine if the source file is not empty. If the source text file is empty (zero bytes), the interface has to deliver an empty text file to the target system. I have set up the empty file handling options correctly on both CCs.
    But when I try with an empty file I get the error message 'Parsing an empty source. Root element expected!'.
    Could you please suggest what I need to do to create an empty text file on the target system from an empty source text file?
    Thanks in Advance....
    Regards
    Sreeni

    >
    Sreenivasulu Reddy jonnavarapu wrote:
    > Hi All,
    >
    > I have an issue with handling an empty file in a Text (FCC) to Text (FCC) file scenario. The interface picks up a text file and delivers it to the target system as a text file. I have used FCC in both the sender and receiver CCs.
    > The interface is working fine if the source file is not empty. If the source text file is empty (zero bytes), the interface has to deliver an empty text file to the target system. I have set up the empty file handling options correctly on both CCs.
    >
    > But when I try with an empty file I get the error message 'Parsing an empty source. Root element expected!'.
    >
    > Could you please suggest what I need to do to create an empty text file on the target system from an empty source text file?
    >
    > Thanks in Advance....
    >
    > Regards
    > Sreeni
    The problem is that when the file is empty there is no XML available for parsing, hence if you are using a mapping it will fail.
    Ideally you should have a module that checks whether the file is empty and, if so, writes out an XML document of the structure you want with no values in the content/fields.
    The next choice would be a Java mapping to handle this requirement. I guess that on an empty file the Java mapping will run into an exception, which you can catch to write out whatever you need (see the sketch below).
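    To make the empty-file branch concrete, here is a minimal sketch of the core logic written against plain Java streams, kept independent of the exact mapping API; the root element name "Document" and the helper class are assumptions for illustration only:
    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.io.PushbackInputStream;

    public class EmptySourceGuard {

        // If the source stream is empty (zero bytes), substitute a placeholder XML document
        // so the rest of the mapping / content conversion has a root element to work with.
        public static InputStream orEmptyDocument(InputStream source) throws Exception {
            PushbackInputStream in = new PushbackInputStream(source, 1);
            int first = in.read();
            if (first == -1) {
                // Empty source: return a document with just an empty root element.
                return new ByteArrayInputStream(
                        "<?xml version=\"1.0\" encoding=\"UTF-8\"?><Document/>".getBytes("UTF-8"));
            }
            in.unread(first);  // not empty: push the byte back and pass the stream on unchanged
            return in;
        }
    }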

  • Mac to Mac file sharing - file permissions problem upon newly created files

    Hi all,
    At home I have two iMac's both running Snow Leopard.
    iMac A/Users/*1 is sharing his iMac A/Users/*1/Documents folder.
    The sharing & permissions - preferences of this folder and the files inside this folder are:
    '*1' : (the iMac A's main user account) : Read & Write
    'share' : (a Sharing Only account, set up from iMac A) : Read & Write
    everyone : : No Access
    iMac B/Users/*2 is able to read & write to this folder because he is connected with iMac A as 'share'.
    So far so good. iMac A/Users/*1 can read & write to his own Documents folder
    and so can iMac B/Users/*2 because he is connected to iMac A as 'share'.
    But: this only applies to the folders and files that are _already in_ the iMac A/Users/*1/Documents folder,
    because I once changed the sharing & permissions -permissions of iMac A/Users/*1/Documents as mentioned above and executed 'Apply to enclosed Items...'.
    So the problem is: when I create a new file or folder on iMac A/Users/*1 or on
    iMac B/Users/*2 and put it in iMac A/Users/*1/Documents, this new file or folder will only
    have the standard sharing & permissions - preferences of any newly created file or folder
    on the iMac X/Users/*Y where it was created. The problem is those sharing & permissions - preferences
    will not let the other iMac X"/Users/*Y" Read & Write to this new file or folder, but only the creator.
    As a result, the other iMac X"/Users/*Y" will only be able to Read the newly created
    file or folder, because the standard sharing & permissions - preferences on
    iMac A/Users/*1 as well as on iMac B/Users/*2 are the following:
    '*Y' : (the iMac X's main user account) : Read & Write
    staff : (Administrator accounts on iMac X) : Read only
    everyone : : Read only
    A pretty manual fix to this problem is walking to iMacX" and changing the
    sharing & permissions - preferences of the newly created file or folder so that iMacX"
    will also be able to Read & Write this newly created file or folder.
    Does anyone know a better fix for my problem? Basically I want to have Read & Write
    sharing & permissions on the newly created file or folder for both iMacs,
    not only for the file or folder creator's iMac. And this without having to change
    sharing & permissions - preferences of this newly created file or folder manually
    each time.
    Thanks in advance,
    Vincent Verheyen.

    Hi all,
    I have found a thread which I think also relates to my problem.
    The standard sharing & permissions - preferences or privileges
    can apparently be changed; this seems to have something to do
    with the umask and changing it. The address is the following:
    *http://www.macosxhints.com/article.php?story=20031211073631814*
    Other ways to change the above-mentioned umask may be
    applications like SharePoints, TinkerTool or BatChMod.
    All of them are also mentioned in the above-mentioned thread.
    I haven't executed the solution mentioned in that thread yet;
    replies to the thread speak about insecurity and failures.
    Furthermore, I haven't found any other solutions to my problem,
    so any help is still greatly appreciated.
    Thanks in advance,
    Vincent Verheyen.

  • How do you delete [empty] a single file or folder from "Trash"?

    How do you delete [empty] a single file or folder from "Trash"?
    Can you "securely delete" just 1 item?

    A dangerous and incorrect shell command has been posted in this thread.
    NEVER empty the Trash in the shell (Terminal.) NEVER put anything in the Trash unless you intend to delete it immediately. If you do put something in the Trash and change your mind about deleting it, move it out or use the Put Back contextual menu item. Then empty the remaining items in the Trash as usual.

  • Problem with DMGs and error: "No Mountable File Systems"

    Problem with DMGs and error: "No Mountable File Systems"
    The files are not corrupt. The problem occurs with all DMGs that are apparently formatted as MS-DOS FAT16. No, the files will not mount with Disk Utility or any other disk-mounting programs I have found.
    This is now the second time this has occurred, and it now affects my MBP and my iMac. The first time, I spent days with Apple support and the only solution was ultimately to back up the data, reformat the HD, start over from scratch and reload everything. That lasted about a month before the problem resurfaced, and it is now an issue on both the iMac and the MBP.
    I tried to identify all the programs I installed immediately before the error, as I am convinced it is the result of a software conflict.
    Recent programs include:
    1) upgrading from Parallels 5.5 to 6.0 on both machines.
    2) using an HP secure II usb drive and setting up a secure disk.
    3) installing new itunes 10
    4) new update to Flip For Mac.
    The files affected are downloaded dmgs, including personal brain and google earth, both which are formatted in FAT16.
    Any help or thoughts? Apple has now spent hours trying, and they say I now have to reformat, wipe and start over. That is unacceptable, and based on past experience the problem is likely to repeat itself. Having to wipe and rebuild an HD every month is not a solution; I need to find the root problem.
    In the meantime, has anyone got a real solution for extracting the data from a DMG using a different method?
    Message was edited by: remaia

    Were you able to find the solution? I am having the same problem; all was fine till I installed some programs. The only one I saw that we both installed was Flip4Mac; I uninstalled it but the problem is still there. I also restored and erased the hard drive, but I'm not up to doing that all over again. If you find anything out, let me know; I would greatly appreciate it.

  • ITunes cannot sync to the AppleTV because of a problem on your computer.  The required file cannot be found.

    I have two first-gen Apple TVs and a new MacBook Pro. I have about 600 GB of music and 40 GB of photos located on an external hard drive (Western Digital). In the past, I have not run into any issues when syncing music or photos. Recently, I get the error message "iTunes cannot sync to the AppleTV because of a problem on your computer. The required file cannot be found." I noticed this after I added a few folders to my photo folder, which the AppleTV is pointed to for accessing and syncing my photos. I went to the AppleTV to see if the new folders were in the photos, but they were not listed and obviously not syncing from the hard drive to the AppleTV.
    For troubleshooting, I disconnected my upstairs AppleTV from iTunes, then reconnected and pointed the AppleTV to the applicable music and photos folders on my external hard drive. The music synced fine, but the photos did not. I got the same error message.
    Does anyone have any advice on what the issue may be?
    I read some posts similar to mine and the referenced deleting the iPhoto file cache.  I did not try this as I do not use iPhoto as the source for my photos syncing to my AppleTV's.
    Thanks - Scott1508

    Looks like my WD hard drive may have an issue. I pointed the Apple TV to the photos folder on my other WD hard drive, which is formatted for Mac, and it synced the photos fine. I had never had an issue with the first WD hard drive, but it was never formatted for Mac.
    Not sure if this could be the issue?
    I am considering getting an Apple Time Capsule (2 TB) for my backup, but I also need a second backup for all my music and photos.
    Does anyone have advice on moving to the Time Capsule as my main backup and keeping a WD as the secondary backup?

Maybe you are looking for

  • USB 3 problem with OEL 6.3

    Hi, after connecting an external 2.5" HDD (USB 3 ready!!) to one of my USB 3 interfaces I recognized a lot of xhci_hcd warnings. They all look the same. xhci_hcd 0000:0e:00.0: WARN: Stalled endpoint. I tried to connect while the OS was running. Accor

  • Passing select-options value in method

    How to pass select-options value in method ? Example: Select-options: carrid for spfli-carrid. class cl_myclass implementation. select  carrid connid from spfli where carrid in carrid. endclass. Thanks

  • Cannot cast beans.dataView.Presentation to beans.graph.ThinGraph

    Hi, I get following error when try to cast Presentation to ThinGraph. Cannot cast oracle.dss.thin.beans.dataView.Presentation to oracle.dss.thin.beans.graph.ThinGraph These are the codes which I get errors on them: <% (ThinCrosstab)simpleCrosstab).ge

  • How to make Matlab work after loading Yosemite ?

    How to make Matlab work after loading Yosemite on a MacBook Pro?

  • Allocation of Profit Center actual postings

    Hi all, Have a question regarding on how to allocate the actual postings done to a profit center. Are there any other methods aside from the assessment (3KE5) and distribution (4KE5). Also, i have found a sap note that allows the creation of cycles (