Shell creation: does it also copy jobs, workflows, ...?

Hi all,
We are going to do a shell creation to clean up our development system (i.e. get rid of all the old rubbish). The shell will be created from our productive system.
My question: does the shell creation also copy the following?
Jobs -> if not, we have 400 jobs; how could I copy these to the "new" system?
Workflows / plan versions -> if not, what can I do?
Code Inspector -> if not, what can I do?
Variants -> if not, I could transport them with the report for exporting variants (OSS)
ABAP versions -> I know the version history is not transported; we have to export it. But how would SAP know, when we import the version file, that some ABAPs are missing (because we deliberately left them out)?
Thanks in advance
Lou

> Does the shell creation also copy the following?
> Jobs -> if not, we have 400 jobs; how could I copy these to the "new" system?
> Workflows / plan versions -> if not, what can I do?
> Code Inspector -> if not, what can I do?
> Variants -> if not, I could transport them with the report for exporting variants (OSS)
All this will be part of the shell system. It's basically your production system without any transactional data.
> ABAP versions -> I know the version history is not transported; we have to export it. But how would SAP know, when we import the version file, that some ABAPs are missing?
How should SAP "know" that?
Markus

Similar Messages

  • Automatic creation of Cost Object owner in workflow

    Hello Experts,
    The business requirement background is:
    The SREQ contact person just enters the cost object and submits it for approval. In all cost object cases he can save the service order and submit it. After the SREQ FRA reviewer's approval, and before the workflow notification mail is sent to the cost object owner, the system should check whether the user exists in the X92 system. If the user does not exist, the user is created automatically in X92 on the basis of the GDDB ID.
    The flow of the service order is as follows:
    Service order creator -> Service Provider (SPROV) FRA reviewer -> Service Requestor (SREQ) Contact Person -> Service Requestor (SREQ) FRA person -> Service type owner / Cost Object Owner
    The solution will be as follows:
    1) Once the SREQ FRA reviewer approves the SO, a notification mail is sent to the Cost Object Owner.
    2) The system should then check whether the GDDB ID of the Cost Object Owner exists in the X92 system.
    3) If the GDDB ID is not in X92, the system should automatically create an SAP user ID for the CO Owner on the basis of the GDDB ID, and an email should be sent to the CO Owner for approval of the SO, along with the user ID details.
    4) If the GDDB ID exists in X92, the SO is submitted for further approvals.
    How can I customize steps 2 and 3 of this solution?
    Any help from the experts will be appreciated and rewarded.
    Thanks in advance....
    Satya

    Hello Ajay,
    Thank you for the response.
    But my requirement is not to populate the cost object.
    It is about the approval flow. The reviewer checks and adds the cost objects to the service order and sends the service order for final approval, i.e. to the Cost Object Owner.
    So my requirement is that the system has to create those Cost Object Owners automatically.
    How can I achieve this?
    Regards,
    Satya

  • My on/off button does not appear to be working on my iPhone 3... is there any way around this? The phone also periodically just switches off.

    My on/off button does not appear to be working on my iPhone 3... is there any way around this? The phone also periodically just switches off.

    Likely a hardware issue... no way around that except to get it repaired or don't use it (the button).
    Try restoring to solve the switching off problem.

  • Do background job work processes go into private mode or not?

    Hi Experts,
    Can anyone tell me: do background job work processes go into private (PRIV) mode or not?
    Thanks and Regards
    Dan !!

    Hi Dan,
    I do not think background work processes would go into PRIV mode.
    The original information was removed by Admin because it was provided by cut-and-paste from the SAP Online Help.
    you will find it <a href="http://help.sap.com/saphelp_47x200/helpdata/en/7a/caa6f3bfdb11d188b30000e83539c3/frameset.htm">here</a>
    The distribution of, participation in distributing, or otherwise sending of this material is against the law. The material you are requesting is copyrighted and available ONLY to customers of SAP. If you need such documents from the SAP Service Marketplace, you must have an "S" user ID and log in yourself to retrieve the material. If you do not have an "S" user ID, you should contact the internal groups responsible at your own company and request one, or ask them to retrieve the document for you.
    By participating and sending such documents you are at risk for legal action and a removal of your account here on SDN and BPX.
    DO NOT send material via email such as this! Further actions will result in officials within your company and your SAP Sales Account being notified and could result in legal action against you as an individual.

  • I have changed my account to SIM only, and when I put my new SIM card into my iPhone 4 it just says 'No Service' in the top left corner; iMessage has stopped working too. I can only use my phone on Wi-Fi at the moment. Does anyone have any idea?

    I have changed my account to SIM only, and when I put my new SIM card into my iPhone 4 it just says 'No Service' in the top left corner; iMessage has stopped working too. I can only use my phone on Wi-Fi at the moment. Does anyone have any idea?

    I already have. I contacted Orange yesterday and they said I would receive a text to confirm the registration of my new SIM card. They said the text would arrive within 2 to 24 hours, and it has now been over 24 hours with no text received.

  • TDMS 3.0 for ERP 6.0: step-by-step procedure for TDSH6 shell creation

    Dear All,
    I need to perform a shell creation. I have never done this and need help with step-by-step instructions.
    I need to refresh my development system with a TDMS shell creation from the production client in the production system.
    I am familiar with TDTIM and have performed many TDTIM copies. I have also gone through the master and operational guides for TDMS.
    Let me list my understanding so far after reading the guides.
    Please correct me if I am wrong and also suggest the correct steps.
    1. All my three systems (central, sender and receiver) are ready from the installation point of view, with authorizations and the latest SPs.
    2. I log into the central/control system and start a new package "ERP shell creation package for SAP release higher than 4.6".
    3. As I have never executed this option, I do not know what steps are inside it.
       But I assume I will be asked for the source client details from which the shell needs to be created (000 or the production client).
       What type of shell: slave or master?
       What type of shell  : slave or master?
    4. During the shell creation from the sender system client, is there downtime for the sender system?
    5. Once the shell creation is done, an export dump will be created.
    6. I perform all the preprocessing steps of a homogeneous system copy on the receiver system.
    7. Stop the receiver system, start the sapinst database instance installation as a homogeneous system copy with the R3load option.
    8. Provide the export dump location to sapinst.
    9. After the database instance is installed, perform the system-copy-specific post-processing steps.
    10. After the system is ready, only client 000 is available in the development system, and I need to create a new client from 000 which will act as my new development client.
    I request all you experts to please provide your comments at the earliest, as the activity starts on the 5th of this month.
    Regards,
    Prateek.

    Hi,
    Let me answer your questions.
    1. Is there downtime for the production system (sender system)?
       SAP recommends that nothing much should be going on during the export of the sender (production) system, but so far I have never encountered an issue doing the export in online mode.
    2. Does shell creation work on both the production and 000 client, or only the production client?
       Be careful! A shell creation is a normal homogeneous system copy using R3load. You will create a 'new' system with the DDIC and the client-independent customizing of the sender system. ALL former data of the target system will be deleted.
    You mention that you will refresh a development system that way. Please be aware of the implications a development refresh has. That said, after the refresh you will have the 000 client and parts (users ... authorizations) of the sender's other clients on the new system.
    3. When I import the dump using sapinst, how many clients are available in the receiver system?
    See above. All client configurations will be available, but nearly no data will be in these clients except for 000. A manual part of the shell creation is to clean up the target system (delete the clients that are not needed). This is a fast process.
    4. Somewhere in the guide it is mentioned that downtime depends on the type of shell I create. Kindly suggest: should I create a master shell or a slave?
    Hm. The master shell is just the concept that you create a completely new system and put it into your transport landscape without filling it with data. You can then use that system as the source for a normal (database-dependent) system copy without impacting your production system. The result is a very small copy that can be repeated very often without any impact on production/QA...
    Downtime is, as described in point 1, not required.
    5. Once I execute this shell creation activity for, say, receiver system A, can I use the same export dump for a second receiver system B?
    Yes.
    6. Lastly, please suggest the activity execution time, as the customer has given us 2 days to execute this activity.
    This depends on your knowledge of homogeneous system copies and on the preparation, plus items like the size of STXL and other tables that will be exported/imported. If you are very familiar with the system copy process using R3load, 2 days are feasible.
    I have seen shell creations done within 1-2 days, but also much longer ones!
    As a prerequisite I would always recommend having the latest R3* + migmon executables available on sender and target.
    I hope I have clarified some of your items.
    Best Regards
    Joerg

  • TDMS for BI - shell creation and SMIGR_CREATE_DDL

    If a homogeneous system copy of a BI system is done, it is necessary to run the report SMIGR_CREATE_DDL because, depending on which database is used, special SQL scripts must be used for some tables (concerning primary keys).
    How does that integrate into a shell creation? Is the shell creation process aware of those scripts? I mean, it is theoretically possible that the SQL scripts contain a table that is excluded from being transferred?
    Markus

    Markus and Pankaj:
    I have created a shell (BW) using Markus' procedure as listed below.
    Because Markus did not mention that SMIGR_CREATE_DDL should be run at the beginning, I did not run it.
    So how can I determine what is missing in the created shell?
    We do have partitioned tables and bitmap indexes.
    Thanks!
    Here is the way provided by Markus:
    I basically do shell creation as follows:
    - install the latest system copy tools (R3load, R3ldctl, R3szchk, libdb<database>slib.dll)
    - create an installation directory and give full permissions to <sid>adm and SAPService<SID>
    - create a directory for export location and give full permissions to <sid>adm and SAPService<SID>
    - open a cmd.exe as <SID>adm, step to the installation directory and execute "R3ldctl -l R3ldctl.log -p ." (note the "dot" which means actual directory)
    - in parallel start the client export of client 000 in source system using SCC8 (profile SAP_ALL) and note the file names you get in the last dialog window
    - when R3ldctl is finished give permissions to SAPService<SID> for all files in the installation directory
    - proceed with TDMS
    - when you are at the point to start the system copy start sapinst, choose system copy and select start migration monitor manually (VERY important!)
    - sapinst will run R3ldctl, R3szchk and then prompts you to start migration monitor
    - step to your normal installation directory (c:\program files\sapinst_instdir....), open export_monitor_cmd.properties and adapt the file. The important thing is that you need to point to YOUR DDL<DB>.TPL file you created in step 4 (in my list here)
    - start export_monitor.cmd and export the system
    - proceed with TDMS to adapt the database sizes (DBSIZE.XML)
    Import:
    - if you have an already installed system uninstall it
    - start sapinst normally, choose system copy and point to the export you created
    - install the system normally (as a new installation)
    - if you want to make sure the import works as the export choose "start migration monitor manually"
    - if sapinst stops and requests you to start migration monitor copy the kernel from source system to target system
    - configure import_monitor_cmd.properties and start migmon
    - logon in client 000, start transaction STMS and create a domain (basically only a transport profile will be created)
    - start program BTCTRNS1 (this will suspend all jobs from the source system but not delete them)
    - copy the files from the client copy to TRANSDIR/cofiles and TRANSDIR/data and import them to client 000 (either use STMS or use command line tp)
    - adapt profile parameters (RZ10)
    - run SGEN
    - invalidate connections in SM59 that point to other production systems
    - finished
    - to re-enable all the jobs run BTCTRNS2; however, I'd do this only if you're sure you have invalidated RFCs and/or sending jobs (e.g. from SCOT)
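    As a rough illustration of the export_monitor_cmd.properties adjustment mentioned in the export steps above (the values below are placeholders based on a typical layout, not a definitive configuration; the essential point is that ddlFile points to the TDMS-adapted TPL file rather than the one sapinst generated):

    ```
    # export_monitor_cmd.properties -- illustrative values only.
    # ddlFile MUST point to YOUR adapted DDL<DB>.TPL file.
    exportDirs=/shell_export/ABAP
    installDir=/shell_install
    ddlFile=/shell_install/DDLDB6.TPL
    dbType=DB6
    jobNum=6
    orderBy=name
    tskFiles=yes
    trace=all
    ```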

  • Shell creation on iSeries

    Hi,
    Does anyone have experience with shell creation on iSeries?
    With program R3LDCTLDB4 we've created the STR & TPL files, they have been adjusted via TDMS, but now we somehow have to start a system copy based on these files.
    When starting SAPinst, it makes preparations to copy the whole system.
    Does anyone have an idea how to tell SAPinst to copy the system based on the already generated STR & TPL files (on iSeries)?
    Kind regards,
    Nicolas De Corte

    Start SAPINST with the parameter SAPINST_CWD = path of the directory where the edited TPL files are placed.
    For example, if your installation directory is d:\sapinst (your TPL files are placed in this directory), then start sapinst like this:
    SAPINST SAPINST_CWD="D:\SAPINST"
    CWD stands for "current working directory". It is also advised to set the environment parameters SAP_DIR and TMP to D:\SAPINST.
    I hope it helps.
    Best regards,
    Pankaj.

  • Restrict creation of inspection lot for certain work centers

    Hi
    My client needs to restrict the creation of inspection lots at certain work centers, although QM is active for all materials processed in these work centers.
    I am using the function module EXIT_SAPLQAAT_002 of enhancement QAAT0001 for this purpose. Inspection type Z04 (with posting to inspection stock) is maintained for all these materials.
    There is a separate Z table in which the material/work-center combinations for which no inspection lot is required are maintained.
    The logic is that the system reads the material and work center from the production order and compares them with the material/work-center combinations maintained in the Z table. If they match, no inspection lot is generated.
    But an error message ("Change the inspection stock of material XXXXXX in QM only") appears for all these materials during confirmation in CO11N. Hence the system does not allow the confirmation to be completed for materials where QM is active.
    Please suggest whether I am using the correct exit or whether any further improvement is required.
    Thanks and regards
    D Mohanty

    Hi
    Actually, the issue is resolved now. The problem was in the coding: we were reading the production order details from table AFRU, but AFRU does not contain fresh orders, so it was replaced with AFKO.
    Also, the Z table must be maintained with the plant/material/work-center combination before any order confirmation.
    Thank you to all who have gone through this post; I hope it will help those who have to map the same scenario at their end.
    Regards
    D Mohanty

  • Time Machine does not cope well with nearly-full disk

    Although Time Machine is supposed to delete old backups when the disk fills up, under some circumstances it does not cope well with a disk-full condition.
    Hardware: iMac (mid 2007)
    OS: Mac OS X version 10.6.8 (Snow Leopard)
    Processor: 2.8 GHz Intel Core 2 Duo
    Memory: 2 GB 667 MHz DDR2 SDRAM
    The other day I noticed that I had not had a successful Time Machine backup in over a day. Backups were taking a long time to run (4 hours or more), during which time the backup disk was chattering constantly and the disk showed "Estimating index time" in the Spotlight menu (even though I had excluded the disk from Spotlight indexing). When the disk finally stopped chattering, the backup finished with no visible errors but Time Machine continued to state that the last successful backup was yesterday. The backup disk was nearly full, but I had assumed that Time Machine was designed to cope with this situation by deleting old backups.
    A complete annotated log file of the incident and my observations can be found here: http://bentopress.com/backup.log.zip
    Here are some of the interesting entries from the system.log file:
    Sep  7 16:22:14 Bento-iMac com.apple.backupd[56515]: Starting standard backup
    Sep  7 16:22:14 Bento-iMac com.apple.backupd[56515]: Backing up to: /Volumes/My Backup Disk/Backups.backupdb
    Sep  7 16:23:07 Bento-iMac com.apple.backupd[56515]: No pre-backup thinning needed: 100.0 MB requested (including padding), 137.2 MB available
    Sep  7 16:25:49 Bento-iMac com.apple.backupd[56515]: Copied 32984 files (23.7 MB) from volume iMac HD.
    Sep  7 16:25:53 Bento-iMac com.apple.backupd[56515]: No pre-backup thinning needed: 100.0 MB requested (including padding), 108.4 MB available
    Sep  7 16:25:57 Bento-iMac KernelEventAgent[47]: tid 00000000 type 'hfs', mounted on '/Volumes/My Backup Disk', from '/dev/disk1s3', low disk, very low disk
    Sep  7 16:25:58 Bento-iMac mds[45]: (Normal) DiskStore: Rebuilding index for /Volumes/My Backup Disk/Backups.backupdb
    Sep  7 16:25:58 Bento-iMac KernelEventAgent[47]: tid 00000000 type 'hfs', mounted on '/Volumes/My Backup Disk', from '/dev/disk1s3', low disk
    Sep  7 16:25:59 Bento-iMac mds[45]: (Normal) DiskStore: Creating index for /Volumes/My Backup Disk/Backups.backupdb
    Sep  7 16:26:00 Bento-iMac KernelEventAgent[47]: tid 00000000 type 'hfs', mounted on '/Volumes/My Backup Disk', from '/dev/disk1s3', low disk, very low disk
    Sep  7 16:26:00 Bento-iMac mds[45]: (Warning) Volume: Indexing reset and suspended on backup volume "/Volumes/My Backup Disk" because it is low on disk space.
    Sep  7 16:26:02 Bento-iMac mds[45]: (Normal) DiskStore: Reindexing /Volumes/My Backup Disk/.Spotlight-V100/Store-V1/Stores/35367D91-8096-4D43-802B-A8658DBAB581 because no basetime was found.
    Sep  7 16:26:02 Bento-iMac mds[45]: (Normal) DiskStore: Rebuilding index for /Volumes/My Backup Disk/Backups.backupdb
    Sep  7 16:26:04 Bento-iMac mds[45]: (Normal) DiskStore: Creating index for /Volumes/My Backup Disk/Backups.backupdb
    Sep  7 16:26:14 Bento-iMac com.apple.backupd[56515]: Error: Flushing index to disk returned an error: 0
    Sep  7 16:26:14 Bento-iMac com.apple.backupd[56515]: Copied 776 files (17.3 MB) from volume iMac HD.
    Sep  7 16:26:15 Bento-iMac com.apple.backupd[56515]: Backup canceled.
    Note the "error: 0". Error code zero normally indicates success.
    There were also many, many, many copies of the following message, which occurred while Spotlight was trying to index the backup disk:
    Sep  7 17:28:40 Bento-iMac com.apple.backupd[58064]: Waiting for index to be ready (100)
    Here's what I think is happening: at the very end of the backup, after the files have been copied to the backup disk, mds resets indexing because the disk is now full. backupd's attempt to flush the index to disk then fails because indexing is suspended (error zero), so the backup is canceled. I suspect that this may occur when the new backup and the old one(s) being deleted are both small enough that the disk "packs" tightly, leaving no room for the index. If a large (>500MB) old backup had been deleted to make room for a small (<100MB) new one, the problem would not have occurred.
    I worked around the problem by switching Time Machine from the full disk to my Time Capsule. (I did not have this system backing up to the Time Capsule originally because I purchased the Time Capsule later for a different system.) The downside of this is that I lost access to older backups (though they are still accessible by right-clicking on Time Machine in the dock and selecting "Browse other Time Machine disks..."—however, I have not actually tried this). I'm also not completely happy having both my main systems backing up to the same hardware, but everything seems to be working well at this point and I do have a secondary backup system, using SuperDuper to clone the systems' disks to an external HD, which I do about once a month.
    Although I have worked around the problem, I am posting this in the hope that it will be useful to someone else, in the hope that someone at Apple will notice and perhaps improve the performance of Time Machine in this situation, and also in the hope that someone can suggest a solution that does not involve throwing a bigger disk at the problem. Is there any way to free up space on Time Machine's backup disk, for example by manually pruning older backups, either through Time Machine itself or via the Finder or command line?

    If you suspect that the disk being full is the problem, have you tried deleting some of the old backup data on the TM disk?
    When in TM, go back to the past a long way, right click on a file/directory and select "delete backup". Do this on some big files you have (so that you get more space per removal) that you change all the time (so you'll still have recent backups).

  • What is the "BW shell creation package for BW system" in TDMS?

    I see that for lower-version SAP systems, such as 4.6 and BW (e.g. 3.0, etc.), there is a "shell creation package" for those system types.
    Could you explain what that is for?
    Thanks!

    1) Where can I find info about the copy sequence, e.g. first shell creation, then initial package, then ...?
    > The TDMS solution operation guide will be helpful; it is available at the SAP Service Marketplace. In short, the sequence is: first shell (only if the repository is not in sync), then the initial package (e.g. TDTIM), and then refresh packages (as and when needed).
    2) For HR we do not have shell creation; what we have is an "initial package for master data and customizing", and the next step is "ERP initial package for HCM personnel dev. PA & PD". Why do we not have shell creation here?
    > For HR, as only a few objects are transferred, the need for a full shell system does not arise; only the objects to be transferred are synced.
    3) Will TDMS replace system copy for ERP, BW, CRM completely?
    > Shell will not replace them, but we may say that it supplements them. When you need a complete copy, i.e. the repository along with the application data, you need to go for a system copy; but if you only need the repository copied from source to destination, then shell is helpful. Also note that shell is a TDMS package, whereas system copy is a SAP standard tool.
    I hope the above response is helpful.
    Regards,
    Pankaj

  • ERP shell creation package

    For any ERP related package (except client deletion package), do we have to run "ERP shell creation package" at the very beginning? 
    The reason I ask this question is as follows:
    To bring HR configuration to the target, we have 2 different methods:
    1) client copy
    2) TDMS package called "ERP initial package for MDC"
    Therefore I have the impression that at least method 1) does not need a shell creation.
    I am not sure whether method 2) needs a shell creation first or not.
    Could you help explain?
    Is there any other exception for shell creation?
    Thanks!

    Hello
    No, shell is not a mandatory step before the TDMS ERP-related packages. The general requirement of the TDMS packages (time-based or object-based) is that the repository on the sender and the receiver be identical. So if the repository on both systems is already in sync, shell is not needed. Also, if there are only minor differences in the repository, shell may not be required, provided the inconsistencies can be removed manually or by importing some missing transports.
    If the differences between the two systems are many, or if the receiver system needs to be built from scratch, then shell is required.
    I hope that helps
    Best regards
    Pankaj

  • Shell creation issue

    Dear experts, I've got a question regarding the shell creation of TDMS.
    I've finished the export from an ECC IDES system (SR3). The size of the export files is over 30 GB... is that normal? Can I reduce the data? Otherwise it will take more than 200 GB of disk in the target system.
    Is this set up in the step "Determine Tables to be Excluded from the Export"?
    Which data can be reduced in the tables for an IDES shell creation?
    Thanks!

    Hi Jett
    Your first question is not very clear to me:
    "It seems that the data folder of the export is still the same size as before. Is that normal?"
    > I think you mean that the size of the export's data folder is the same as before. Yes, that is normal. However, I feel the doubt that remains in your mind is that you want to be sure a reduced transfer is happening and not a full transfer. You can't make that decision based on the size of the export folder. To decide, you can do the following:
    Pick one excluded table and search for its existence in the .TOC files in the export folder. If you find it in any of the .TOC files, that indicates a full transfer is happening instead of a reduced one. If you don't find it in a .TOC file but you do find the excluded table in the .STR files, that indicates a reduced transfer is happening and everything is fine. The presence of the excluded table in a .STR file means that the structure information of the excluded table is transferred, although the data is not.
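    The check described above can be sketched as a small shell helper (the function name is made up for illustration; adjust the directory to your actual export location and pick a table from your exclusion list):

    ```shell
    #!/bin/sh
    # check_excluded_table DIR TABLE
    # Reports whether TABLE appears in the export's .TOC files (its data was
    # exported) or only in the .STR files (structure only -- the reduced
    # transfer worked as intended).
    check_excluded_table() {
        dir=$1; table=$2
        if grep -l "$table" "$dir"/*.TOC >/dev/null 2>&1; then
            echo "WARNING: $table found in a .TOC file -- its data was exported"
        elif grep -l "$table" "$dir"/*.STR >/dev/null 2>&1; then
            echo "OK: $table only in .STR -- structure only, data excluded"
        else
            echo "NOTE: $table not found -- check the table name and path"
        fi
    }

    # Example call (path is illustrative):
    # check_excluded_table /shell_export/ABAP STXL
    ```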
    Regarding your second set of queries:
    Please understand that there are two parts to shell creation. The first is excluding the data to be transferred for a set of tables, so that no data at all is transferred to the target system for the excluded tables. The second is deciding how much file space (on the target system) should be allocated to the tables on the exclusion list: by default, the import process assigns a table the same space in the target system as it occupied in the source system. So if a table for which no data is transferred was, say, 10 MB in the sender, the import process will also allocate 10 MB to it in the target system, although it will be empty. To avoid this, and to reduce the overall size of the target system, we reduce the sizes allocated to the excluded tables in the target system in the activity "Determine and Modify Size of Tables".
    Now for the question of what the ratios mean and on what basis they are assigned:
    > For example, Ratio20 means that the import process will assign 20% of the original size (in the sender system) to the table in the target system. The ratios are determined by how much data you expect each table to hold later on, which depends on which TDMS process (TDTIM, TDMDC, TDTCC) you want to execute after the shell creation. For the TDTIM process type it may also depend on the period for which you want to transfer the data. For this reason we have provided various templates (TDTIM, TDMDC, etc.) in the activity "Determine and Modify Size of Tables"; we have tried to build this intelligence into the system. So if you want to execute a TDMDC scenario after shell creation, choose the TDMDC template in that activity, and the system will automatically assign different ratios to the different tables.
    In my last post I suggested that if you want to reduce the size of your target system even further (the assigned template already reduces it automatically), you can do so by lowering the ratios for the tables for which you don't expect much data to be transferred. This option of altering the ratios for a certain table or set of tables is available in the activity "Determine and Modify Size of Tables". SAP already delivers ratios that are optimal for the different tables; if you want to override those settings and provide your own ratios, you do so at your own risk.
    I hope this post explains the entire concept. If you still have doubts, feel free to write again.
    Regards
    Pankaj.

  • TDMS shell creation - R3szchk running a long time

    Hi,
    I am running a shell creation package. In the export ABAP phase, R3szchk has been running for a long time and nothing is written to the log file:
    -rw-r-----    1 root     sapinst           0 Oct  4 19:56 /tmp/sapinst_instdir/ERPEhP4/LM/COPY/DB6/EXP/CENTRAL/AS-ABAP/EXP/R3szchk.exe.log
    Where else can I check to find out the issue?
    Thanks
    Din

    Thanks Pankaj for the valuable information.
    ---The size of the exports may depend on the data in your client-independent tables. There are certain client-independent tables which hold a lot of data, for example STXL, STXH and the DYNP* tables. It also depends on the volume of custom development you have done in your systems. Most of the client-independent tables are exported in full during shell.
    I could understand that. I ran the optional activity "Check size of large tables (optional)" in the TDMS shell package, and it returned some large tables, such as:
    STXH - 190,854,900 KB
    STXL - 26,819,840 KB
    DYNPSOURCE - 20,452,251 KB
    Some of these are client-independent and some are client-dependent tables. For example, STXH is a dependent table with a size of 190 GB; does that mean this table will be exported in full during the shell export?
    ---However, you need to be sure that because of certain mistakes you are not exporting the complete system (including application data). As stated by Markus, you need to make sure that you are using the Migration Monitor in manual mode and that you configure the Migration Monitor control file with correct parameter values.
    I think I configured the export monitor control file correctly and started it manually.
    Here is the export monitor startup log:
    Export Monitor is started.
    CONFIG: 2011-10-05 00:49:22
    Application options:
    dataCodepage=4102
    dbType=DB6
    ddlFile=/shell_install/DDLDB6.TPL
    ddlMap=
    exportDirs=/shell_export/ABAP
    ftpExchangeDir=
    ftpExportDirs=
    ftpHost=
    ftpJobNum=3
    ftpPassword=*****
    ftpUser=
    host=
    installDir=/shell_install
    jobNum=6
    loadArgs=
    mailFrom=
    mailServer=
    mailTo=
    monitorTimeout=30
    netExchangeDir=
    orderBy=name
    port=
    r3loadExe=
    server=
    taskArgs=
    trace=all
    tskFiles=yes
    Thanks
    Din

  • Help, I can't sync any music to my iPhone 5s or iPad 2! It says the sync is finished, but the music is greyed out and not on my devices. I've tried unchecking the music and doing it manually. Nothing works. I took it to the Apple Store Robina; no help.

    Help, I can't sync any music to my iPhone 5s or iPad 2! It says the sync is finished, but the music is greyed out and not on my devices. I've tried unchecking the music and doing it manually. Nothing works. I took it to the Apple Store Robina; no help.

    So I have found a common denominator for this phenomenon. After having the same problem, I started to re-copy my playlist of 500+ songs in groups of 10-20, 30+, 75+, and then 100+. Any music I have purchased from iTunes won't copy, but everything else did. I'm unsure whether I made a playlist called "Purchased" at some point or whether it is an iTunes default; I also noticed it contained some of the songs from my other playlist and was showing on my iPhone. Regardless of what I did, such as dragging music from that playlist to the other and clearing songs from it, nothing helped until I removed it altogether from the iPhone itself rather than through iTunes. After that, music copied fine to my playlist.
    So, my theory is that iTunes is getting tripped up on purchased tracks already showing on the phone and all songs are showing grayed out/dotted circle because iTunes can't resolve the conflict with purchased tracks.
    Hope this makes sense and is helpful.

Maybe you are looking for

  • How can I set my time capsule to back-up only once a day?

    How can I set my Time Capsule to back up only once a day? It appears to be backing up several times a day. I'm not sure there is a set schedule for the backups, and they take a great deal of time; over an hour each.

  • New GL, Profit Centre Reporting, Settlement

    HI, Our company system has ECC6, New GL, but no enhancement packs. I notice that Plant Maintenance Orders, once settled, do not generate an accounting document, but only a Profit Centre and controlling document. Therefore the GL will not agree with t

  • Can I access printer connected to windows PC on my airport network

    I have an airport extreme network with 3 MACs and 1 Dell laptop (yes I had to get a used Dell to run Mural Creator and TileCreator). Three printers (a dye sublimation printer, a Canon i9900, and an HP laserjet 3380) are connected to my airport extrem

  • IMAC G5 LCD screen

    I am on my third LCD screen and I am presently waiting for another LCD screen which of course is back ordered. I have been told that this is NOT a common problem. I have approx 100 colored lines on my screen at this time. Has anyone had any success w

  • Hi   lsmw

    hi 2 questions 1.  when is lsmw is preferred, when BDC is preferred to migrate data. 2.  could any one pl send me steps to migrate data using LSMW-bapi method, i know direct method, but i do not know bapi method through lsmw. thanx rocky