TDMS for reducing data volume

Hi Experts,
We have implemented TDMS and installed it in all systems.
Our system landscape is DEV, QUAL and PRDN (ECC systems).
We would like to use TDMS to reduce the data volume in the non-production systems.
We also have one more project system in the transport line.
Could anyone please advise which scenario is best for using TDMS to reduce data volume in the non-production systems without much disturbance to the production system?
Also, how does it take care of the development system, since the customizing would be overwritten in DEV?
Thank you in advance.
Shylesh

Hi Shylesh,
You can use several packages to reduce your data. If you need application data you can use a time slice (package TDTIM); if you don't need it, you can use the master data and customizing option (TDMDC).
I suppose you are going to transfer from one client to another, so the best thing to do is to create a new client on DEV if you still need the old customizing settings.
Concerning the impact on the production environment: can you give me an idea of the database size?
Regards,
Eddy

Similar Messages

  • Preconversion: Reducing Data Volume

    Hi,
    During the Unicode preconversion we have this strongly recommended step "Reducing Data Volume". If anyone has implemented this step, can you tell me how much time it takes (on average) to finish?
    Regards,
    Ravikanth

    Hi Ravikanth,
    This step is optional. SAP recommends it because it can reduce the Unicode conversion runtime and downtime, since the amount of data in the tables is reduced.
    Also, data archiving is a separate project in itself, and the time this activity takes depends on which tables you want to archive and how much data you want to archive. You can reduce data yourself by deleting old background job logs, BDC logs, old spool requests and old ABAP dump logs.
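    As an illustration only, the standard housekeeping reports usually scheduled for this kind of cleanup are listed below (please verify the exact report names and selection variants for your release before running them):
    RSBTCDEL2 - delete old background job logs
    RSPO1041  - delete old spool requests
    RSSNAPDL  - delete old ABAP short dumps
    RSBDCREO  - reorganize batch input (BDC) sessions and logs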

  • Is multi-master TimesTen replication a good option for huge data volumes?

    Hi,
    There are 3 TimesTen nodes in our production setup. There will be around 5 million rows in each node initially, which will gradually increase to about 10 million. Once our application moves to production, there will be around 50-70 transactions per second on each node, which need to be replicated to the other nodes.
    Initially we thought of going with Active-Standby-Subscriber replication. However, in that case, if both the active and the standby node go down, it becomes a site failure. So is an Active-Active (multi-master replication) configuration a good option? Will data collisions happen when replication runs in both directions?
    Thanks in advance.
    Nithya

    Multi-master replication is rarely a good idea. You will get data collisions unless you rigorously partition the workload. Conflict detection and resolution is not adequate to guarantee consistency over time. Recovery back to a consistent state after a failure is complex and error prone. I'd strongly advise against a multi-master setup, especially for a high volume system.
    You seem to be concerned that 2 of the 3 systems may fail, resulting in a site outage. The likelihood of that is small if you have set things up with separate power etc. With the A/S-pair-based approach you would still have query capability if the two master systems failed. In reality, I would say the chance of all 3 systems failing is not much less than the chance of just 2 failing (depending on the reason for the failure).
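    For illustration, the active-standby pair with a read-only subscriber that this recommendation refers to is declared with a replication scheme roughly like the following (a minimal sketch with placeholder data store and host names; the exact clauses, such as RETURN RECEIPT/TWOSAFE and STORE attributes, depend on your requirements and TimesTen version):
    CREATE ACTIVE STANDBY PAIR
      mystore ON "host1",
      mystore ON "host2"
      SUBSCRIBER mystore ON "host3";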
    Chris

  • Converting data volume type from LINK to FILE on a Linux OS

    Dear experts,
    I am currently running MaxDB 7.7.04.29 on Red Hat Linux 5.1. The file types for the data volumes were initially configured as type LINK and correspondingly made links at the OS level via the "ln -s" command.
    Now (at the OS level) we have replaced the links with the actual files and brought up MaxDB. The system comes up fine without problems, but I have a two-part question:
    1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files?
        (Might we encounter a performance problem?)
    2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    Your feedback is greatly appreciated.
    --Erick

    > 1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files?
    >     (Might we encounter a performance problem?)
    I never saw any problems, but since I don't have a Linux system at hand I cannot tell you for sure.
    Maybe it's about how to open a file with special options like DirectIO if it's a link...
    > 2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    There's no 'converting'.
    Shutdown the database to offline.
    Now logon to dbmcli and list all parameters there are.
    You'll get three to four parameters per data volume, one of them called
    DATA_VOLUME_TYPE_0001
    where 0001 is the number of the volume.
    Open a parameter session and change the value of the parameters from 'L' to 'F':
    param_startsession
    param_put DATA_VOLUME_TYPE_0001 F
    param_put DATA_VOLUME_TYPE_0002 F
    param_put DATA_VOLUME_TYPE_0003 F
    param_checkall
    param_commitsession
    After that the volumes are recognized as files.
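    As a minimal end-to-end sketch of the whole sequence (database name and DBM credentials are placeholders; list the existing parameters first, for example with param_directgetall, to see how many DATA_VOLUME_TYPE_nnnn entries you actually have):
    dbmcli -d MAXDB1 -u dbm,secret db_offline
    dbmcli -d MAXDB1 -u dbm,secret
    > param_startsession
    > param_put DATA_VOLUME_TYPE_0001 F
    > param_put DATA_VOLUME_TYPE_0002 F
    > param_put DATA_VOLUME_TYPE_0003 F
    > param_checkall
    > param_commitsession
    > db_online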
    regards,
    Lars

  • TDMS for RETAIL/STORE

    Hello Expert Zone,
    Finally I started working on the CRM extraction, and here is my scenario.
    Source system: 300 warehouses' worth of member data.
    Target: Blank/empty.
    Objective: Copy/extract member/membership data from the source by warehouse type (320-340 only), to bring in 20 warehouses' worth of data, and nothing else.
    What template should be used in this case? I would appreciate it if someone could share their inputs with a step-by-step procedure.
    Thanks,
    V.

    Hello Vikram,
    I see this query related to the discussion we had in your previous thread:
    http://scn.sap.com/message/15025928#15025928
    If I understand correctly, you have CRM loyalty management coupled with SAP POS.
    What I understand is that there is an SAP Retail system and an SAP CRM system.
    For CRM data reduction, TDMS provides date criteria. TDMS CRM does not reduce CRM loyalty management data, which means that the data in these tables will be fully copied across from the sender to the receiver system.
    We do have a TDMS Retail solution where we do reduction based on the criteria below:
    1) Store/site (in retail terminology)
    2) Customer number
    3) Date
    Let me know if I have answered your question.
    Best Regards,
    Madhavi

  • How to measure and limit the data volume?

    Morning.
    I need a tool to measure and limit the data volume of my internet usage. My internet tariff allows a maximum of 5 GB of data volume per month. If my usage exceeds that amount, the bandwidth is reduced to only 64 kbit/s, or the excess data volume must be paid for at an extraordinarily expensive rate.
    Do you know of a tool that measures the data volume over a given time period and can alert me or limit the internet connection, for instance if more than half of the monthly data volume has been used up by the middle of the month?
    Kind regards, vatolin

    You could generate a large amount of data and then use any SNMP viewer (BMC Dashboard, SolarWinds, Nagios, CiscoWorks, etc.) to see the throughput of the interfaces at peak. But why bother? Numerous research firms (Gartner etc.) have commented that Cisco is very precise about its stated throughputs.
    Regards
    Farrukh

  • Data Volume - Calculation performance

    Hi,
    We are experiencing degrading calculation performance as the data volume increases.
    We are implementing BPC 7.5 SP05 NW (on BW 7.0 EHP1).
    An allocation script that ran in 2 minutes when the database contained only 800,000 records took over 1 hour after the database was populated with a full year of data.
    All logic has been written to calculate on a reduced, defined scope, but that does not seem to improve the execution time. When I check the formula log, the scope is respected.
    The application is not that large either: 12 dimensions, the largest containing 300 members and 3 hierarchical levels.
    We optimized the database, but to no avail.
    What can be done to optimize performance? Are there any technical settings in BPC or BW that can be fine-tuned?
    Thanks,
    Regis

    Hi Ethan,
    Take a look at one of the allocation scripts: http://pastebin.com/TA16xCd3
    We are testing RUNLOGIC but we are facing some problems in two situations:
    - passing the DM package variable to the RUNLOGIC script
    - using a passed variable in the called script
    The DM prompts for 3 selections: ENTITY, TIME and CATEGORY.
    The RUNLOGIC script:
    *SELECT(%DIVISIONS%,"[ID]",DIVISION,"[LEVEL]='DIV' AND [STORECOMMON]<>'Y'")
    *SELECT(%BRANCHES%,"[ID]",BRANCH,"[BRANCHTYPE]='STORE'")
    *START_BADI RUNLOGIC
         QUERY=OFF
         WRITE=ON
         LOGIC=ALLOC_DIV_ACTUAL_S.LGF
         DIMENSION ENTITY=C1000
         DIMENSION TIME=FY10.MAY
         DIMENSION CATEGORY=ACTUAL
         DIMENSION DIVISION=%DIVISIONS%
         DIMENSION DIVISION=%BRANCHES%
         CHANGED=ENTITY,TIME,CATEGORY,DIVISION,BRANCH
         DEBUG=ON
    *END_BADI
    In ALLOC_DIV_ACTUAL_S.LGF, we are using a %DIVISION_SET% variable. At the time of validating, we get a message "Member "" does not exist".
    When we run the package, it fails with the same error message:
    An exception with the type CX_UJK_VALIDATION_EXCEPTION occurred, but was neither handled locally, nor declared in a RAISING clause
    Member "" not exist
    Thanks
    Regis

  • TDMS For BI

    Hi,
    I have done TDMS for R/3: 4 iterations of TDTCC, TDTIM and BPL. Now my client requires a TDMS solution for BI. Recently I had a workshop with SAP on TDMS ERP (not BI) at my client. Could anyone tell me what type of questions will come up from the BI functional people, and the answers to those questions? I have to give a presentation to the client before I start hands-on.
    Thanks in Advance

    The process tree for TDMS for BI is very similar to that for ERP.
    - Master data is completely transferred.
    - Transactional data is reduced.
       Only active ODS tables are transferred by default; new tables and change log tables are not copied.
       For InfoCubes, only the E and F fact tables are reduced; dimension tables are completely copied.
    - PSA data is not transferred by default.
    - Technical content is also not transferred.
    As of now, only time-based reduction is possible with TDMS BI. An option of the TDTCC kind is not currently provided with TDMS BI.
    It is possible to copy selected InfoCubes, but appending to an existing client is not possible. Whenever we run any TDMS package, the data in the receiver client is deleted and then the copied data is written to it.
    Regards
    Isha

  • Best path for Core Data implementation

    Hi all,
    First post so pls go easy!
    I'm a seasoned Windows/web app developer who has recently (3-4 months) discovered the Mac, Cocoa and Obj-C. I've been buried in Apple docs and the Hillegass and Dalrymple books for some time now, trying to get to the point where I'm ready to build the Cocoa project that I have in mind, which looks to be well suited to a Core Data based application driven by SQLite. The data model is reasonably complex, with around 10 inter-related entities which must retain data integrity.
    Anyway, on to the question... historically I would have built a class library to encapsulate the use of the data model and accessed that class library when events fired to do what needs to be done. The Cocoa solution appears to support this - presumably through creating my own framework that is then referenced by the Cocoa application. I can see, though, that there is another path where I skip the encapsulation and build a Core Data based Cocoa app directly.
    At a high level - is there a preference between the two approaches?
    The latter seems well documented/supported, but I am cynically thinking that is because it is more straightforward and clearly faster to get going; are there other advantages, such as performance?
    For background, the running app will follow a similar form to Mail.app: multi-view, with some data tables and custom views in play.
    Thanks,
    Chris

    Hi K T - thanks.
    Bottom line, I think, is that encapsulation is a safety blanket that I probably need to let go of. CD ticks the boxes on a theory level; subject to the implementation not being too heavy, it seems like the logical step. The only consideration was to encapsulate a framework built on the known methods, then move the framework to CD under the covers when ready - that seems a bit gutless though, and almost certainly inefficient time-wise. I guess there is little point in encapsulating CD from the outset - it feels like it just adds unnecessary work on top of some degree of performance overhead?
    "Are you looking for flexibility or performance, by the way?"
    Performance - the data model is unlikely to change once bedded in beyond addition of properties etc very infrequently. The app is likely to need to handle many tens of thousands of rows of data (albeit small in terms of data volume per row) for some users, and my conclusion from the documentation was that SQLite is the most appropriate route if committing to CD where data volume and/or relationships are plentiful. Is that a fair assessment?
    "Are you looking to mimic traditional application interfaces or to adopt trends that are currently unfolding?"
    The app I plan to build desperately needs to be brought up to date - possibly even beyond the advanced UI that AppKit seems to offer by default IMO. That said I don't want to overcommit on the extent of the build, but I do want to turn heads without just slapping coverflow or similar in for the sake of it. If you have any references or examples for doable leading edge UI design on OSX they would be gratefully received.
    Thanks again for your help - really appreciate it.
    Chris

  • Error: "This backup is too large for the backup volume."

    Well TM is acting up. I get an error that reads:
    "This backup is too large for the backup volume."
    Both the internal boot disk and the external backup drive are 1 TB. The internal one has two partitions: the OS X one, which is 900 GB, and a 32 GB NTFS one for Boot Camp.
    The external drive is a single OS X Extended partition of 932 GB.
    Both the Time Machine disk and the Boot Camp disk are excluded from the backup, along with a "Crap" folder for temporary large files and the EyeTV temp folder.
    Time Machine says it needs 938 GB to back up only the OS X disk, which has 806 GB in use and the rest free. The TM pane says that "only" 782 GB are going to be backed up. Where did the 938 GB figure come from?
    This happened after moving a large folder (128 GB in total) from the root of the OS X disk over to my Home folder.
    I have reformatted the Time Machine drive, so I have no backups of my data at all, and it refuses to back up!
    Why would it need 938 GB of space to back up a disk that has "only" 806 GB in use? Is there any way to reset Time Machine completely?
    Some screenshots:
    http://www.xcapepr.com/images/tm2.png
    http://www.xcapepr.com/images/tm1.png
    http://www.xcapepr.com/images/tm4.png

    xcapepr wrote:
    Time Machine says it needs 938 GB to back up only the OS X disk, which has 806 GB in use and the rest free. The TM pane says that "only" 782 GB are going to be backed up. Where did the 938 GB figure come from?
    Why would it need 938 GB of space to back up a disk that has "only" 806 GB in use? Is there any way to reset Time Machine completely?
    TM makes an initial "estimate" of how much space it needs, "including padding", that is often quite high. Why that is, and just exactly what it means by "padding", are rather mysterious. But it also needs work space on any drive, including your TM drive.
    But beyond that, your TM disk really is too small for what you're backing up. The general "rule of thumb" is that it should be 2-3 times the size of what it's backing up, but it really depends on how you use your Mac. If you frequently update lots of large files, even 3 times may not be enough. If you're a light user, you might get by with 1.5 times. But that's about the lower limit.
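    Applied to the numbers in this thread: 2-3 times the 806 GB in use works out to roughly 1.6-2.4 TB, so the 932 GB backup volume is well below even the lower end of that guideline.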
    Note that although it does skip a few system caches, work files, etc., by default it backs up everything else, and does not do any compression.
    All this is because TM is designed to manage its backups and space for you. Once its initial full backup is done, it will by default back up any changes hourly. It only keeps those hourly backups for 24 hours, but converts the first of the day to a "daily" backup, which it keeps for a month. After a month, it converts one per week into a "weekly" backup that it will keep for as long as it has room.
    What you're up against is finding room for those 30 dailies and up to 24 hourlies.
    You might be able to get it to work, sort of, temporarily, by excluding something large, like your home folder, until that first full backup completes, then removing the exclusion for the next run. But pretty soon it will begin to fail again, and you'll have to delete backups manually (from the TM interface, not via the Finder).
    Longer term, you need a bigger disk; or exclude some large items (back them up to a portable external drive or even DVD/RWs first); or a different strategy.
    You might want to investigate CarbonCopyCloner, SuperDuper!, and other apps that can be used to make bootable "clones". Their advantage, beyond needing less room, is that when your HD fails, you can immediately boot and run from the clone, rather than waiting to restore from TM to your repaired or replaced HD.
    Their disadvantages are that you don't have the previous versions of changed or deleted files, and, because of the way they work, their "incremental" backups of changed items take much longer and far more CPU.
    Many of us use both a "clone" (I use CCC) and TM. On my small (roughly 30 GB) system, the difference is dramatic: I rarely notice TM's hourly backups -- they usually run under 30 seconds; CCC takes at least 15 minutes and most of my CPU.

  • "Backup is too large for the backup volume" error

    I've been backing up with TM for a while now, and finally it seems as though the hard drive is full, since I'm down to 4.2GB available of 114.4GB.
    Whenever TM tries to do a backup, it gives me the error "This backup is too large for the backup volume. The backup requires 10.8 GB but only 4.2GB are available. To select a larger volume, or make the backup smaller by excluding files, open System Preferences and choose Time Machine."
    I understand that I have those two options, but why can't TM just erase the oldest backup and use that free space to make the new backup? I know a 120 GB drive is pretty small, but if I have to just keep accumulating backups indefinitely, I'm afraid I'll end up with 10 years of backups and an 890-zettabyte drive taking up my garage. I'm hoping there's a more practical solution.

    John,
    Please review the following article as it might explain what you are encountering.
    "This Backup is Too Large for the Backup Volume"
    First, much depends on the size of your Mac's internal hard disk, the quantity of data it contains, and the size of the hard disk designated for Time Machine backups. It is recommended that any hard disk designated for Time Machine backups be at least twice as large as the hard disk it is backing up from. You see, the more space it has to grow, the greater the history it can preserve.
    *Disk Management*
    Time Machine is designed to use the space it is given as economically as possible. When backups reach the limit of expansion, Time Machine will begin to delete old backups to make way for newer data. The less space you provide for backups, the sooner older data will be discarded. [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    However, Time Machine will only delete what it considers "expired". Within the Console logs this process is referred to as "thinning". It appears that many of these "expired" backups are deleted when hourly backups are consolidated into daily backups and daily backups are consolidated into weekly backups. This consolidation takes place once hourly backups reach 24 hours old and daily backups reach about 30 days old. Weekly backups will only be deleted, or "thinned", once the backup drive nears full capacity.
    One thing seems for sure, though: if a new incremental backup happens to be larger than what Time Machine currently considers "expired", then you will get the message "This backup is too large for the backup volume." In other words, Time Machine believes it would have to sacrifice too much to accommodate the latest incremental backup. This is probably why Time Machine always overestimates incremental backups by 2 to 10 times the actual size of the data currently being backed up. Within the Console logs this is referred to as "padding". This is so that backup files never actually reach the physical limits of the backup disk itself.
    *Recovering Backup Space*
    If you have discovered that large unwanted files have been backed up, you can use the Time Machine “time travel” interface to recovered some of that space. Do NOT, however, delete files from a Time Machine backup disk by manually mounting the disk and dragging files to the trash. You can damage or destroy your original backups by this means.
    Additionally, deleting files you no longer wish to keep on your Mac does not immediately remove such files from Time Machine backups. Once data has been removed from your Mac's hard disk it will remain in backups for some time until Time Machine determines that it has "expired". That's one of its benefits - it retains data you may have unintentionally deleted. But eventually that data is expunged. If, however, you need to remove backed-up files immediately, do this:
    Launch Time Machine from the Dock icon.
    Initially, you are presented with a window labeled "Today (Now)". This window represents the state of your Mac as it exists now. DO NOT delete or make changes to files while you see "Today (Now)" at the bottom of the screen. Otherwise, you will be deleting files that exist "today" - not yesterday or last week.
    Click on the window just behind “Today (Now)”. This represents the last successful backup and should display the date and time of this backup at the bottom of the screen.
    Now, navigate to where the unwanted file resides. If it has been some time since you deleted the file from your Mac, you may need to go farther back in time to see the unwanted file. In that case, use the time scale on the right to choose a date prior to when you actually deleted the file from your Mac.
    Highlight the file and click the Actions menu (Gear icon) from the toolbar.
    Select “Delete all backups of <this file>”.
    *Full Backup After Restore*
    If you are running out of disk space sooner than expected, it may be that Time Machine is ignoring previous backups and is trying to perform another full backup of your system. This will happen if you have reinstalled the system software (Mac OS), replaced your computer with a new one, or had significant repair work done on your existing Mac. Time Machine will then perform a new full backup. This is normal. [http://support.apple.com/kb/TS1338]
    You have several options if Time Machine is unable to perform the new full backup:
    A. Delete the old backups, and let Time Machine begin afresh.
    B. Attach another external hard disk and begin backups there, while keeping this current hard disk. After you are satisfied with the new backup set, you can later reformat the old hard disk and use it for other storage.
    C. Ctrl-Click the Time Machine Dock icon and select "Browse Other Time Machine disks...". Then select the old backup set. Navigate to files/folders you don't really need backups of and go up to the Action menu ("Gear" icon) and select "Delete all backups of this file." If you delete enough useless stuff, you may be able to free up enough space for the new backup to take place. However, this method is not assured as it may not free up enough "contiguous space" for the new backup to take place.
    *Outgrown Your Backup Disk?*
    On the other hand, your computers drive contents may very well have outgrown the capacity of the Time Machine backup disk. It may be time to purchase a larger capacity hard drive for Time Machine backups. Alternatively, you can begin using the Time Machine Preferences exclusion list to prevent Time Machine from backing up unneeded files/folders.
    Consider as well: do you really need ALL that data on your primary hard disk? It sounds like you might need to archive to a different hard disk anything that is not of immediate importance. You see, Time Machine is not designed for archiving purposes, just for backing up your local drive(s). In the event of disaster, it can get your system back to its current state without having to reinstall everything. But if you need LONG-TERM storage, then you need another drive that is removed from your normal everyday working environment.
    This KB article discusses this scenario with some suggestions including Archiving the old backups and starting fresh [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    Let us know if this clarifies things.
    Cheers!

  • Open hub for Inventory data

    Experts,
    We have a requirement where we need to push out Inventory data from BW to third party systems.
    We have a daily cube and monthly snapshot cube implemented.
    Now, there are fields that the third-party systems require, including movement type. There are 500K movements every day, so putting this field in the cube would make the cube huge. Can we have another DSO, loaded only from 2LIS_03_BF, that supplies data to the third-party systems via Open Hub, in addition to the model we currently have?
    Would this be a good design - or rather, would it work (in theory)?

    Hi,
    Yes, you are thinking in the correct direction. Adding movement type to the cube will unnecessarily increase the data volume.
    Cubes are not meant for detailed data.
    I would suggest going ahead with the DSO approach. The open hub will work with a DSO.
    Regards,
    Geetanjali

  • Using a partitioned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitioned cache (with the primary data on the heap) with off-heap storage for the backup data?
    I think it could be worthwhile to do so, as backup data is associated with a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data?
    In particular, what are the impacts on performance?
    Thanks.
    Regards,
    Dominique
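    For reference, a configuration along these lines would typically be expressed in the cache configuration file roughly as follows (a minimal sketch with made-up scheme names; element names, ordering and sizes should be verified against the cache-config schema of your Coherence release):
    <distributed-scheme>
      <scheme-name>partitioned-offheap-backup</scheme-name>
      <service-name>DistributedCache</service-name>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>off-heap</type>
        <initial-size>1MB</initial-size>
        <maximum-size>128MB</maximum-size>
      </backup-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>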

    Hi,
    It seems that using a scheme for the backup store is broken in the latest version of Coherence; I got an exception using your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in the debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackupMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229
    PS: using an asynchronous wrapper around a disk-based backup storage should reduce the performance impact.

  • Why use change pointers for master data IDocs and not the change IDoc?

    Hi Gurus,
    I have  one doubt about  Idoc.
    When changes to master data have to be sent, change pointers are configured and used (CDPOS & CDHDR). In the case of transactional data changes, a change IDoc is used, as is the case with orders (we use the ORDCHG message type for ORDERS to send the change details for an order that was already sent to the other system).
    Why can't we use a change IDoc or message type instead of change pointers in the case of master data, or vice versa in the case of transactional data?
    Your valuable input will be rewarded with suitable points!
    -B S B.

    Hi,
    That is a good question ... SAP seems never to have designed the change pointer mechanism to handle transactional data IDocs, only master data IDoc distribution.
    It may be due to data volume: transactional data can change many times within a day, compared with master data, which hardly changes, or only once in a while.
    Regards,
    Ferry Lianto

  • Changing time-out for scheduled data refresh

    Using a Power Query connection, is it possible to extend the time-out for scheduled data refreshes? The amount of data to be retrieved is rather limited, but there are thousands of rows (NAV server).
    If not, any suggestions to how to reduce latency?
    Thanks.

    Thorm,
    Is this still an issue?
    Thanks!
    Ed Price, Azure & Power BI Customer Program Manager
