Data storage for CUA activities in tables.

Hi,
Just as SU01 reads its values from tables USR04/UST04: from which tables do CUA-related activities and transactions read and store their data, and which program runs in the background?
How do the tables below support CUA activities?
D340L
D341L
D342L
Kindly suggest.
Anil

Hi Anil,
Please refer to this:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/fe4f76cc-0601-0010-55a3-c4a1ab8397b1

Similar Messages

  • SR830 data storage for RS232

    Hi,
    I found LabVIEW code here that can store data from an input signal (e.g. AI) on the SR830 and transfer it over GPIB.
    I used a function generator to generate a sine wave (frequency 30 Hz, Vp-p 100 mV) connected to the SR830.
    Clearly, I got a correct result when I used the code over the GPIB communication interface (left graph).
    Then I changed the communication interface to RS232 to acquire the same signal, but it gives wrong results (right graph).
    Here is the original code, "data storage for GPIB".
    Here is the code I rewrote for RS232 myself; what did I do wrong?
    Attachments:
    SR830 DATA STORAGE EXAMPLE.VI (54 KB)
    scan test1.vi (43 KB)

    The SR830 expects three integer arguments (separated by commas) for the TRCA? query:
    * i = 1 or 2 specifies which buffer to download from
    * j >= 0 specifies the starting point
    * k >= 1 specifies how many points to download
    If you know k, you can calculate the exact number of bytes in the response. For your code, which downloads 4000 points at a time, that will be something like 60 kB (if memory serves, the response in ASCII transfer mode is 15 bytes/value). Make sure that you're not hitting any timeout or serial buffer size limits with a transfer of that size.  
    Edit: You have your bitrate set to 9600 baud (1200 bytes/second) and a 10 second timeout. That will read 12 kB before timing out, or 1/5th of your transfer. The 830 supports baud rates up to 19,200, which will help, but you'll also need either a longer timeout or to transfer your data in smaller chunks. 
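    To make that arithmetic concrete, here is a short Python sketch of the chunked-transfer approach (using the pyvisa library; the serial resource name is a placeholder, and the 15-bytes-per-value figure and carriage-return termination are assumptions taken from the discussion above):
    ```python
    import pyvisa

    # Rough arithmetic from the discussion above: ASCII transfer mode is
    # roughly 15 bytes per value, and a serial line moves about baud/10
    # bytes per second.
    BYTES_PER_VALUE = 15
    BAUD = 19200                      # maximum the SR830 supports
    BYTES_PER_SECOND = BAUD // 10
    TIMEOUT_S = 10

    # Largest chunk that fits inside the timeout, with a 50% safety margin
    MAX_POINTS = int(BYTES_PER_SECOND * TIMEOUT_S * 0.5 / BYTES_PER_VALUE)

    rm = pyvisa.ResourceManager()
    # 'ASRL1::INSTR' is a placeholder resource name; adjust to your port.
    sr830 = rm.open_resource('ASRL1::INSTR', baud_rate=BAUD,
                             timeout=TIMEOUT_S * 1000)
    sr830.read_termination = '\r'     # adjust to match your instrument setup

    def read_buffer(buffer_no, total_points):
        """Download total_points from buffer 1 or 2 in timeout-safe chunks."""
        values = []
        start = 0
        while start < total_points:
            count = min(MAX_POINTS, total_points - start)
            reply = sr830.query(f'TRCA? {buffer_no},{start},{count}')
            values.extend(float(v) for v in reply.strip().strip(',').split(','))
            start += count
        return values

    data = read_buffer(1, 4000)
    print(len(data), 'points read')
    ```
    With these numbers each chunk is 640 points (about 9.6 kB), comfortably inside the 10-second timeout even at 19,200 baud.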

  • Organizational data determination for mkt. activities

    We need organizational data determination for marketing activities created from campaigns. In CRM we have only the Marketing part of the client's organization, using the Marketing scenario in PPOMA_CRM.
    If we leave the org. unit field of the activity blank, the activity shows a red error light.
    The responsible person is the one from the campaign, and is responsible for a Marketing unit (the one assigned to the campaign).
    The possible solutions for our client could be 1) org data determination from the preceding document, or 2) org data determination from the Marketing organization of the responsible person.
    Concerning 1): There is no "natural" way to assign the preceding
    document (campaign). Am I right?
    Concerning 2): The only way to determine the unit was to choose Organizational Model Determination Rule 10000148 for profile ZDUOCTEST and later change the scenario of rule 10000148 from sales to marketing. This is obviously not standard, as the rule and the profile now have different scenarios. What would be the standard solution?
    Thanks for the help.
    Cristina

    Hi,
    Please do not get confused by the terminology of the standard solution: what you are doing by changing the scenario from sales to marketing is actually a standard solution.
    An activity is generated in response to the campaign or survey being submitted, and that is always a sales scenario. Since the follow-on activity application is in the marketing scenario, changing the scenario from sales to marketing so that you can generate the org data from those activities is the standard approach, and it works this way only.
    If you want to work the other way around, then just use the preceding-document access sequence in the scenario; in that case you have to use the campaign data instead of the activity generated in response to the campaign.
    Best regards,
    Ashish

  • Can we use SQL Server as a primary data storage for SharePoint 2013?

    Is it possible to use an external SQL Server as the primary data storage for SharePoint 2013?
    With this implemented we could generate reports and query data, and it would become very powerful to use.
    Or
    Is there a way to keep using the existing content databases while using an external SQL Server as secondary data storage at the same time? In other words, I want the data to be redundant in SQL Server.
    Thanks 

    Hi,
    Not sure if I understand your question correctly. SharePoint has its own content database storing SharePoint list data; we can use SSRS/SSAS integrated with SharePoint mode to use a SharePoint list as a data source for generating reports.
    http://technet.microsoft.com/en-us/library/bb326358(v=sql.105).aspx
    http://sqlmag.com/business-intelligence/delivering-bi-through-sql-server-and-sharepoint
    http://www.mssqltips.com/sqlservertip/2068/using-a-sharepoint-list-as-a-data-source-in-sql-server-reporting-services-2008-r2/
    Thanks
    Daniel Yang
    TechNet Community Support

  • Offline data capture for SQLServer2k shows 0 tables

    Hi,
    I'm evaluating the OMWB Offline Data Capture for a SQL Server 2000 migration.
    I ran the OMWB_OFFLINE_CAPTURE.BAT script for SQL Server 2000. It seems that the DAT files are generated and no error messages appear. The problem occurs when I start the OMWB and try to "Capture Source Database".
    I specify the directory where the generated DAT files reside; the DAT files appear in the file list and the status is AVAILABLE. But when I run the capture with the Oracle Model creation, I see among the LOG messages that "...Tables Mapped: 0...".
    I created a TEST database with the table [tab] in it. In the generated SS2K_SYSOBJECTS.dat file there is a row for this table:
    tab     \t     2041058307     \t     U      \t     1     \t     1     \t     1610612736     \t     0     \t     0     \t     0     \t     2006/09/26      \t     0     \t     0     \t     0     \t     U      \t     1     \t     67     \t     0     \t     2006/09/26      \t     0     \t     0     \t     0     \t     0     \t     0     \t     0     \t     0     \t     \r\n
    The other objects are not in the Oracle Model either (I believe the user sa should have appeared as well).
    Please, can anybody help with this problem?
    Pavel Leonov, Consultant
    Ispirer Systems Ltd.
    SQLWays - Data, schema, procedures, triggers conversion to Oracle, DB2, SQL Server, PostgreSQL, MySQL
    http://www.ispirer.com

    I changed the separators back to the default, but the Oracle Model is still not created. Still the same problem: no tables from the source database appear at all.
    Here is how the row for the table is specified in the SS2K_SYSOBJECTS.dat file:
    tab     §     2041058307     §     U      §     1     §     1     §     1610612736     §     0     §     0     §     0     §     2006/09/26      §     0     §     0     §     0     §     U      §     1     §     67     §     0     §     2006/09/26      §     0     §     0     §     0     §     0     §     0     §     0     §     0     §     ¤
    Here is some information from the log:
    Type: Information
    Time: 26-09-2006 15:13:56
    Phase: Capturing
    Message: Row delimiter being used for offline capture is ¤
    Type: Information
    Time: 26-09-2006 15:13:56
    Phase: Capturing
    Message: Column delimiter being used for offline capture is §
    Type: Information
    Time: 26-09-2006 15:13:57
    Phase: Capturing
    Message: Generating Offline Source Model Load Formatted File For SS2K_SYSOBJECTS.dat ,File Size: 5235
    Type: Information
    Time: 26-09-2006 15:13:57
    Phase: Capturing
    Message: Generated Offline Source Model Load File d:\DBClients\oracle\ora92\Omwb\offline_capture\SQLServer2K\itest\SS2K_SYSOBJECTS.XML
    Type: Information
    Time: 26-09-2006 15:14:27
    Phase: Creating
    Message: Mapping Tables
    Type: Information
    Time: 26-09-2006 15:14:27
    Phase: Creating
    Message: Mapped Tables
    Type: Summary
    Time: 26-09-2006 15:14:27
    Phase: Creating
    Message: Tables Mapped: 0, Tables NOT Mapped: 0
    By the way, after I try to create the Oracle Model, for each of the DAT files an XML file is created with the following content:
    <?xml version="1.0" encoding="Cp1251" ?><START><er></er></START>
    Maybe this will help shed some light on the problem.
    Pavel
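    As a quick sanity check (a hypothetical helper, not part of OMWB), one could verify that every row of the .dat file uses the delimiters reported in the capture log and that all rows carry a consistent number of columns; the cp1251 encoding is taken from the generated XML header:
    ```python
    # Hypothetical sanity check for an OMWB offline-capture .dat file:
    # confirm the delimiters and a consistent column count per row.
    COL_DELIM = '\u00a7'   # section sign, the column delimiter from the log
    ROW_DELIM = '\u00a4'   # currency sign, the row delimiter from the log

    def check_dat_file(path, encoding='cp1251'):
        with open(path, encoding=encoding) as f:
            raw = f.read()
        rows = [r for r in raw.split(ROW_DELIM) if r.strip()]
        counts = {r.count(COL_DELIM) for r in rows}
        print(f'{len(rows)} rows, column-delimiter counts per row: {sorted(counts)}')
        if len(counts) > 1:
            print('Inconsistent column counts: the capture file may be corrupted.')

    check_dat_file('SS2K_SYSOBJECTS.dat')
    ```
    If the row count is zero or the delimiter counts vary, the capture file itself is the problem rather than the Oracle Model creation step.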

  • Bulk Data Storage for Travelling & Charging

    Good day readers
    I shall be travelling later this year and don't want to take lots of SDHC cards. Does anyone have advice on what to take that stores lots of video files and stills, please? I want to keep bulk and weight down.
    I have looked at the Vosonic range, but these seem expensive.
    I have considered a small netbook (a lot less than a Vosonic unit); 160 GB to 250 GB will be sufficient.
    I want to be able to view the video files and stills.
    I also want to be able to charge the units. There will be no mains electricity where I am going, so solar charging ideas are needed for the camcorder batteries and the data storage device.
    All ideas welcome; what solutions have people used?
    Thanks....

    cookie2402 wrote:
    Harm
    Thanks, but how do I upload the SDHC data to the Hard Drives. I want to upload the SDHC cards and store the data and re-use the SDHC card.
    Sorry if I wasn't clear
    John
    John, many netbooks have a slot for installing your SDHC card and can read it. Then use Harm's suggested USB hard drive to offload the data. Possibly even better, use an external SSD, as they take appreciably less power than a hard drive.
    Bill

  • Data storage for Master data.

    Hi
    I would like to know the limit, in terms of number of records, that a master data table can hold or should hold.
    If there is a possibility that the master data volume will increase very significantly, is it advisable to create an ODS to store the master data?
    Please advise
    Kind Regards,
    Kate

    Dear Kate,
       There is not really any limit for this, since it depends on the database storage space; and master data especially will not grow as much as you might think.
    Thanks,
    Raj

  • Data storage for the T.codes

    Hi SAP experts,
    If we maintain data via the following t-codes, in which tables will the data be stored?
    T-codes:
    PO13
    PPOC_OLD
    PPOCE
    PPSC
    Thanks in advance

    Hi Prasad,
    Go to t-code SE16: you will find all the objects in table HRP1000, and the object relationships in HRP1001.
    Objects
    Organizational unit - O
    Job - C
    Position - S
    Person - P
    Task - T
    Cost Center - K
    Work Center - A
    Relationships
    organizational unit to organizational unit
    jobs to positions
    organizational unit to positions
    position to position
    position to persons
    Cheers
    Ratan
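    For illustration, here is a minimal Python sketch that reads a few HRP1000 rows remotely via the standard RFC_READ_TABLE function module (assuming the pyrfc library and an RFC-capable user; the connection parameters are placeholders):
    ```python
    from pyrfc import Connection

    # Placeholder connection parameters; substitute your own system details.
    conn = Connection(ashost='sap.example.com', sysnr='00', client='100',
                      user='RFC_USER', passwd='secret')

    # Read a few organizational units (object type 'O') from HRP1000.
    result = conn.call('RFC_READ_TABLE',
                       QUERY_TABLE='HRP1000',
                       DELIMITER='|',
                       OPTIONS=[{'TEXT': "OTYPE = 'O'"}],
                       ROWCOUNT=10)

    for row in result['DATA']:
        print(row['WA'])     # each row comes back as one delimited string
    ```
    Swapping the OTYPE filter to 'S', 'C', 'P', etc. pulls the other object types from the list above; HRP1001 can be queried the same way for relationships.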

  • External data storage for iPad

    I am a student and just purchased an iPad (due to arrive any day now, eek!) to be able to order my textbooks in e-book format.
    The issue I am running into is non-WiFi data backup. I don't want to keep the books on the actual iPad, as the files are huge, but keeping them on iCloud or something similar online limits my access when the WiFi at work goes down. I did not have the money for a larger memory and cannot afford the 3G model due to being college poor. So, while the ability to store my textbooks on iCloud is a great start, what if I'm working and the WiFi goes out? (It does often, and I'm constantly breaking into our server room to reset it for patients who want to use their tablets or e-readers, but it doesn't always work, and our IT people are not there overnight, which is when I work.) I was told the camera connector kit would work to store data on a flash drive or SD card; however, it won't. I was also told about the iFlashDrive, but the price is way out of my league and it is merely a flash drive anyway; I can't justify that cost. Is there a cost-effective way to store my textbook files for when I can't access the internet at all? This just does not seem very user friendly to me at the moment.

    Keep your eyes open for a device called the "ZoomIt" or "ZoomIt Reader". It looks and acts similar to the Apple Camera Connection Kit, except that it allows you to access much more of the content on SD/SDHC cards. I bought one for loading data files onto the micro-SDHC card for my GPSr, and it works great with my iPhone 4. I have placed pictures, videos and PDF files on my SDHC cards and accessed them from my iPhone, so it might get you closer to what you want. The only problem I've seen lately is that these devices seem to come at a premium. I paid about $39 for mine when I bought it new on Amazon, but the last time I looked, every retailer with them in stock was asking closer to $100 for them.
    If your app offers "open with" and "export" functionality, you can probably use this device to copy content onto or off of your iPad. I can't imagine trying to get by without mine now that I've had it so long...

  • Is it possible to increase data storage for 5s?

    I have had a 5s for about 4 months, having upgraded from an iPhone 4 with 32 GB. Unfortunately I didn't think about the capacity when I got my 5s, and I don't have enough room for the latest update, 7.0.1. I am sure most of the capacity is taken up with photos, most of which are duplicated in Camera Roll & My Photo Stream. Is it possible to add more GBs to my phone? I really don't want to have to buy another phone after only 4 months.
    Thanks,
    Pat

    Thanks! I was afraid I couldn't add GBs but thought I'd ask. I'll try using iTunes on my computer & see if there's enough space. I also need to coordinate photo storage between my iPhone, Mac & iPad, I guess.

  • Power BI - Office 365 data storage for Excel (Power Pivot & Power Query) architecture

    Hi, I'm trying to find a comprehensive view of the Power BI architecture, specifically where the data for Power Pivot & Power Query files is held. I've seen references to the Vertipaq engine and the data being stored in the Excel file; however, this was in reference to 2010.
    Other references I've seen (SharePoint on premise with Power Pivot) have shown me that the data held in the Excel files on the SharePoint sites is actually held in SQL Server instances.
    Could anyone provide clarity on the architecture for Power BI, please?
    Tim
    Thank you for your time, folks!

    Hi Tim,
    let me try to explain this:
    When you load data into Power Pivot, it is actually loaded into Vertipaq (an in-memory column store) and resides there until you close Excel. Technically speaking, Power Pivot is a single-user SQL Server Analysis Services instance hosted within Excel. When you close Excel, a backup of the Power Pivot "database" is created and stored within the resulting .xlsx file.
    Any user that has access to the Excel file also has access to the Power Pivot model, so there is effectively no security beyond file access. This is very similar to any other Excel file.
    Once the Excel file is stored in SharePoint / SharePoint Online and is accessed for the first time, the backup stored within it is restored into a SQL Server Analysis Services instance running in SharePoint Integrated mode, and all connections to the Excel file are redirected to that SSAS instance. After some period of inactivity the model is unloaded from that SSAS instance again to free up the memory.
    So the data itself basically always resides inside Power Pivot, or respectively the Excel file.
    Power Query only stores the actual query (the M script), but no data.
    hth,
    gerhard
    Gerhard Brueckl
    blogging @ http://blog.gbrueckl.at
    working @ http://www.pmOne.com
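    One quick way to see that the model really lives inside the workbook: an .xlsx file is a ZIP package, and in the workbooks I have checked the Power Pivot model backup sits in an internal part under xl/model/ (treat the exact path as an assumption; 'workbook.xlsx' is a placeholder). A small Python sketch:
    ```python
    import zipfile

    # An .xlsx file is a ZIP package; list any parts that belong to the
    # embedded data model (commonly xl/model/item.data) and their sizes.
    with zipfile.ZipFile('workbook.xlsx') as xlsx:
        for info in xlsx.infolist():
            if 'model' in info.filename.lower():
                print(f'{info.filename}: {info.file_size / 1024:.0f} KB')
    ```
    A workbook with a large Power Pivot model will show a correspondingly large model part, which is also why such files grow so much bigger than plain worksheets.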

  • Issue in data replication for one particular table

    Hi,
    We have implemented Streams in our test environment and are testing the business functionality. We have an issue with data replication for only one custom table; all other tables replicate properly with no issues. When we do a 100-row update, data replication does not happen for that particular table.
    Steps to simulate the issue:
    Update one row: replication successful.
    Update 100 rows: after 3-4 hours, nothing has happened.
    Please let me know if any of you have come across a similar issue.
    Thanks,
    Anand

    Extreme slowness on the apply site is usually due to locks, library cache locks, or overly large segments in the Streams technical tables left after a failure during a heavy insert. These tables are scanned with full table scans, and scanning millions of empty blocks hundreds of times results in a very big loss of performance, but not to the extent you are describing. In your case it sounds more like a form of lock.
    We need more info on this table: LOB segments? Tablespace in ASSM?
    Is the table partitioned, and do you have a job that performs drop partitions? Most interesting are the system waits and, above all, the apply server session waits. Given the time frame, I would rather look for a lock or a library cache lock caused by a drop partition or concurrent updates. While you are performing the update, you may query DBA_DDL_LOCKS, DBA_KGLLOCK and DBA_LOCK_INTERNAL to check that you are not caught in a library cache lock.
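    As one concrete way to run that check from a script (a sketch using the cx_Oracle driver; the credentials, DSN and schema name are placeholders, and DBA_KGLLOCK / DBA_LOCK_INTERNAL require catblock.sql to have been run):
    ```python
    import cx_Oracle

    # Placeholder credentials/DSN; needs SELECT on the DBA views.
    conn = cx_Oracle.connect('system', 'secret', 'dbhost/ORCL')
    cur = conn.cursor()

    # Who is holding or waiting on DDL / library cache locks for the
    # Streams schema while the 100-row update is running?
    cur.execute("""
        SELECT session_id, owner, name, type, mode_held, mode_requested
          FROM dba_ddl_locks
         WHERE owner = :owner
    """, owner='STREAMS_SCHEMA')

    for row in cur:
        print(row)
    ```
    Any row showing a non-NULL mode_requested during the update points at a session blocked on a library cache lock, which would match the symptoms described.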

  • File path of open data storage

    Hello all!
    Now I'm using the Open Data Storage, Write Data and Close Data Storage blocks for storing and extracting result data. Regarding the file path: previously I set the data path by double-clicking the "open data storage" block and entering the file location in the indicated place, and that worked!
    Now that I have made a stand-alone application of this program and shall use it on other computers, the file location I entered in the Open Data Storage block isn't valid any more on other PCs. So I modified my source code by wiring a "Current VI Path" to the Open Data Storage block's file path node instead of entering it inside the block, and this doesn't work! At run time an error appears in the Write Data block saying that the storage refnum isn't valid!
    I'm wondering why I can't specify the file path like this. Is there any way to specify the file path as the current VI path?
    Thanks!
    Chao

    You need to account for the path changes when the code is built into an application; have a look at this example.
    https://decibel.ni.com/content/docs/DOC-4212
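    The underlying issue, illustrated here in Python rather than LabVIEW since the same thing happens with frozen executables (the sys.frozen check follows the PyInstaller convention), is that "the current file's path" changes once the code is packaged:
    ```python
    import sys
    from pathlib import Path

    # In a built/frozen application the script's own path no longer points
    # where it did in the development environment, so resolve the data
    # directory relative to the executable at run time instead of
    # hard-coding it.
    if getattr(sys, 'frozen', False):      # running as a built executable
        base_dir = Path(sys.executable).parent
    else:                                  # running from source
        base_dir = Path(__file__).parent

    data_file = base_dir / 'results.dat'
    print(f'Storing results in {data_file}')
    ```
    In LabVIEW terms, a Current VI Path that pointed at MyProgram.vi during development points inside the built executable afterwards, so the path has to be stripped or rebuilt accordingly, as the linked example shows.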

  • Local data storage with XML vs SQLite

    Short version:
    I have a mobile app that gathers and stores a large amount of data. The current XML solution (all in one file) is having performance issues as the volume of data increases beyond a certain point.
    Can I be confident that using SQLite and a local database will be a better option and improve performance?
    Long version:
    I have built a mobile app that is used by my client to collect data.
    When we started the project the amount of data was relatively small. Given that, and given the past experiences of my team, we chose to use an XML file to store the collected data. When the app starts it loads the whole xml file into a set of data objects in memory for use by the app. The data is not loaded again for as long as the app stays running. We then run a routine whenever the app wants to save to the HD that converts all of the data back into XML.
    The data is split into 'projects' and each project naturally has its own xml node. To improve performance when saving, each instance of the Project class stores its loaded node string. Then, when the save routine is called, if the classes data has not been changed it simply returns the original string instead of going through the whole process of re-compiling it.
    Of course this does not change the fact that whenever a save is performed, the whole xml structure must be saved back to the file on the HD.
    The app has now been used in anger for quite some time, and we are starting to get reports of performance issues. The main culprit is loading, which only happens once but can apparently now take an exceptionally long time and causes problems during startup, to the point where users think the app has crashed.
    I am now tasked with trying to improve the data storage for the app. My first reaction was to try breaking the XML down into multiple files which can then be loaded and saved as needed, instead of handling everything all at once. I am worried, though, about the implications of the app trying to handle in excess of 2000 XML files in some cases. It may not be an issue at all, but it stands out to me as something to be concerned about.
    So the best other option I am aware of is to use SQLite and save all the data into a local database. I have very little personal experience with SQL and databases, though, so while a lot of the documentation and blog posts I see suggest that this is the way to go for mobile apps with a large amount of data, I cannot be quite certain. The big issue is that I cannot afford to sell the idea of such a radical change (and, more importantly, the time it would take to implement it) to my client without being quite sure that it will be a valid solution to the problem.
    So my question to you, Flex community, is: would you continue to use XML, or is SQLite the better way to go for large amounts of data? Do you think I would see an improvement by using SQLite? Additionally, do you have any tips or experience with a similar situation that might be helpful to me?
    Thank you
    Jamie

    I've been very happy using the internal SQLite database with Flex. You should definitely get much better performance from it; just being able to load your data asynchronously should be a great benefit, if nothing else. The sheer amount of data in your XML file being passed around is probably quite a hit (especially on mobile). Breaking up the XML shouldn't be too crazy to do (as you're already doing it somewhat, I'd imagine, when you're accessing the information from within your app), but I'd say switching to SQLite is a great place to start.
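    To give a feel for the switch (sketched with Python's built-in sqlite3 module rather than ActionScript; the table and column names are made up for illustration), the key win is loading and saving one project at a time instead of parsing and rewriting a single monolithic file:
    ```python
    import sqlite3

    # Hypothetical schema: one row per project, with the project's data
    # stored as its own payload so the app can load a single project on
    # demand instead of parsing one monolithic XML file at startup.
    conn = sqlite3.connect('appdata.db')
    conn.execute("""
        CREATE TABLE IF NOT EXISTS projects (
            id      INTEGER PRIMARY KEY,
            name    TEXT NOT NULL,
            payload TEXT NOT NULL   -- the per-project data, e.g. its old XML node
        )
    """)

    def save_project(project_id, name, payload):
        # Only the changed project is written, not the whole data set.
        conn.execute(
            'INSERT OR REPLACE INTO projects (id, name, payload) VALUES (?, ?, ?)',
            (project_id, name, payload))
        conn.commit()

    def load_project(project_id):
        # Only the requested project is read at startup or on demand.
        return conn.execute(
            'SELECT name, payload FROM projects WHERE id = ?',
            (project_id,)).fetchone()

    save_project(1, 'Site survey', '<project id="1">...</project>')
    print(load_project(1))
    ```
    The same per-row access pattern carries over to AIR's local SQLite database, which is what makes the incremental loads and saves cheap compared with the all-at-once XML approach.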

  • Allocation of storage for OS X Lion

    How do I manage allocation of storage? OS X Lion comes with "free" 5 GB of iCloud storage, but all of it is allocated to Mail. I want to change this so I can use some for backups.
    I tried System Preferences > iCloud > Manage, but don't see anywhere to change the allocation.
    Any suggestions for backing up using space on the hard drive, so I can retrieve the data later for a restore?

    iCloud is not a general cloud storage solution. Rather it provides the cloud mechanism by which Apple syncs your mail, contacts, calendars, and bookmarks with your computers and iDevices. If you need general data storage for backup or other storage needs, then use a third-party service like SugarSync or ADrive.
    If you need system backup then buy an external hard drive for backup. See:
    Basic Backup
    For some people Time Machine will be more than adequate. Time Machine is part of OS X. There are two components: 1. A Time Machine preferences panel as part of System Preferences; and, 2. A Time Machine application located in the Applications folder. It is used to manage backups and to restore backups. Time Machine requires a backup drive that is at least twice the capacity of the drive being backed up.
    Alternatively, get an external drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    Carbon Copy Cloner
    Data Backup
    Deja Vu
    SuperDuper!
    Synk Pro
    Tri-Backup
    Visit The XLab FAQs and read the FAQ on backup and restore.  Also read How to Back Up and Restore Your Files.
    Although you can buy a complete external drive system, you can also put one together if you are so inclined. It's relatively easy and typically only requires a Phillips head screwdriver. You can purchase hard drives separately. This gives you an opportunity to shop for the best prices on a hard drive of your choice. Reliable brands include Seagate, Hitachi, Western Digital, Toshiba, and Fujitsu. You can find reviews and benchmarks on many drives at Storage Review.
    Enclosures for FireWire and USB are readily available. You can find FireWire-only enclosures, USB-only enclosures, and enclosures that feature multiple ports. I would stress getting enclosures that use the Oxford chipsets, especially for FireWire drives (911, 921, 922, for example). You can find enclosures at places such as:
    Cool Drives
    OWC
    WiebeTech
    Firewire Direct
    California Drives
    NewEgg
    All you need do is remove a case cover, mount the hard drive in the enclosure and connect the cables, then re-attach the case cover.  Usually the only tool required is a small or medium Phillips screwdriver.
