Sample file data for bapi_salesorder_createfromdat2

Hi all,
I have written a Z program which calls BAPI_SALESORDER_CREATEFROMDAT2 to create a sales order.
While running, it shows this error:
VP   Enter Ship-to-party or Sold-to-party
Can anyone give me a file format with real-time data for this BAPI?
Senthil

Hi ,
Here is some code; have a look at it, it should help you create the file.
LOOP AT lt_temp1 INTO wa_temp1.
    header-doc_type     = 'TA'.  " the order type must be filled ('TA'/'OR' is an assumption; use yours)
    headerx-doc_type    = 'X'.
    header-sales_org    = wa_temp1-vkorg.
    headerx-sales_org   = 'X'.
    header-purch_no     = wa_temp1-bstnk.
    header-distr_chan   = wa_temp1-vtweg.
    headerx-distr_chan  = 'X'.
    header-division     = wa_temp1-spart.
    header-purch_no_s   = wa_temp1-bstnk.
    headerx-division    = 'X'.
    wa_partner-partn_role = 'AG'.
    wa_partner-partn_numb = wa_temp1-kunnr.
    APPEND wa_partner TO it_partner.
    wa_partner-partn_role = 'WE'.
    wa_partner-partn_numb = wa_temp1-kunnr.
    APPEND wa_partner TO it_partner.
    CLEAR: wa_partner.
    LOOP AT lt_temp2 INTO wa_temp2 WHERE kunnr = wa_temp1-kunnr.
      "AND matnr = wa_temp1-matnr.
*wa_item-itm_number = wa_temp2-posnr .
      wa_item-material   = wa_temp2-matnr.
      wa_item-plant      = wa_temp2-werks.
      wa_item-req_qty    = wa_temp2-fkimg * 1000. " DAT1 quantity fields carry implied decimals, hence * 1000
      wa_item-target_qty = wa_temp2-fkimg * 1000.
      APPEND wa_item TO it_item.
    ENDLOOP.
    CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT1' " the older DAT1 variant; a DAT2 sketch follows below
      EXPORTING
        order_header_in       = header
*   WITHOUT_COMMIT            = ' '
*   CONVERT_PARVW_AUART       = ' '
     IMPORTING
       salesdocument          = v_vbeln
*   SOLD_TO_PARTY             =
*   SHIP_TO_PARTY             =
*   BILLING_PARTY             =
       return                 = return
      TABLES
       order_items_in         = it_item
       order_partners         = it_partner.
    IF v_vbeln ne space.
      wa_vbeln-vbeln = v_vbeln.
      APPEND wa_vbeln TO lt_vbeln.
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
      HIDE wa_vbeln-vbeln.
      CLEAR: wa_partner,header,v_vbeln, wa_temp2.
      REFRESH: it_partner,it_item.
    ELSE.
*    LOOP AT return .
      it_error-srno = idx.
      it_error-err_msg = return-message.
      APPEND it_error.
*    ENDLOOP.
    ENDIF.
    idx = idx + 1.
  ENDLOOP.
ENDFORM.                    " CREAT_BAPI
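
And since the question is about the DAT2 variant: here is a minimal, self-contained sketch of the same call against BAPI_SALESORDER_CREATEFROMDAT2, with made-up sample values. The order type, organizational data, customer and material below are placeholders only; replace them with master data that exists in your system.

DATA: ls_header  TYPE bapisdhd1,
      ls_headerx TYPE bapisdhd1x,
      ls_partner TYPE bapiparnr,
      lt_partner TYPE STANDARD TABLE OF bapiparnr,
      ls_item    TYPE bapisditm,
      lt_item    TYPE STANDARD TABLE OF bapisditm,
      ls_itemx   TYPE bapisditmx,
      lt_itemx   TYPE STANDARD TABLE OF bapisditmx,
      lt_return  TYPE STANDARD TABLE OF bapiret2,
      lv_vbeln   TYPE bapivbeln-vbeln.

* header: all values are illustrative placeholders
ls_header-doc_type    = 'TA'.        " standard order ('OR' on English systems)
ls_headerx-doc_type   = 'X'.
ls_header-sales_org   = '1000'.
ls_headerx-sales_org  = 'X'.
ls_header-distr_chan  = '10'.
ls_headerx-distr_chan = 'X'.
ls_header-division    = '00'.
ls_headerx-division   = 'X'.

* partners: without the AG/WE lines you get exactly the message from the question
ls_partner-partn_role = 'AG'.        " sold-to party
ls_partner-partn_numb = '0000001000'.
APPEND ls_partner TO lt_partner.
ls_partner-partn_role = 'WE'.        " ship-to party
APPEND ls_partner TO lt_partner.

* one item plus its update-flag structure
ls_item-itm_number = '000010'.
ls_item-material   = 'M-01'.
ls_item-target_qty = 1.
APPEND ls_item TO lt_item.
ls_itemx-itm_number = '000010'.
ls_itemx-material   = 'X'.
ls_itemx-target_qty = 'X'.
APPEND ls_itemx TO lt_itemx.

CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT2'
  EXPORTING
    order_header_in  = ls_header
    order_header_inx = ls_headerx
  IMPORTING
    salesdocument    = lv_vbeln
  TABLES
    return           = lt_return
    order_items_in   = lt_item
    order_items_inx  = lt_itemx
    order_partners   = lt_partner.

IF NOT lv_vbeln IS INITIAL.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ENDIF.

Check lt_return for messages when lv_vbeln comes back empty; the "Enter Ship-to-party or Sold-to-party" error is what you see when the AG/WE entries are missing from order_partners.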
with regards
nilesh

Similar Messages

  • I need a sample excel data for practicing Dashboards

    Hi Experts,
    I am new to SAP BO Dashboards. I need sample Excel data for practicing and designing dashboards. Please suggest where I can get sample Excel files.
    Regards
    Abhi

    Hi,
    Open the samples in Dashboards that come with their source data as Excel; or search on Google and you will find dashboard files with sample Excel data; or create your own sample data and use it in a dashboard for learning.
    Amit

  • Collecting samples of data for analysis

    I’m afraid I am quite new to labview so excuse the simple line of questioning. I am receiving values from a plc and I am looking to collect two different samples of data for analysis:
    I would like to collect 40 values and once collected, rotate the sample so the oldest value is replaced by the new one, but maintaining an array of 40 values. This is to calculate the rolling average of the latest 40 values for the duration of the while loop.
    Secondly, I would like to calculate the average of all values collected for the duration of the while loop. This means the sample will keep growing for the duration of the while loop and I will need an array of increasing size to be analysed.
    I know the array functions can do this, however I am unable to figure out how. Any assistance or examples to help achieve this would be greatly appreciated.
    Best regards,
    Stuart Wilson

    Here is a quick (and dirty) way. I know that there are more elegant ways, can't look at them at the moment, but this may give you ideas.
    P.M.
    LabVIEW 7.0
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion
    Attachments:
    rotate values.vi (31 KB)
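
    In case the attachment is not available, here is a rough sketch of the same logic in plain code (ABAP here only because it is a compact text form; all names are invented, and the idea maps directly onto LabVIEW array functions and shift registers): keep a fixed 40-element buffer that drops its oldest entry for the rolling average, plus a running sum and count for the overall average.

    DATA: lt_window  TYPE STANDARD TABLE OF f,
          lv_val     TYPE f,
          lv_new     TYPE f,
          lv_len     TYPE i,
          lv_win_sum TYPE f,
          lv_rolling TYPE f,
          lv_all_sum TYPE f,
          lv_all_cnt TYPE i,
          lv_overall TYPE f.

    " for each newly acquired value lv_new, inside the acquisition loop:
    APPEND lv_new TO lt_window.
    DESCRIBE TABLE lt_window LINES lv_len.
    IF lv_len > 40.
      DELETE lt_window INDEX 1.            " drop the oldest value, keep 40
      lv_len = 40.
    ENDIF.
    CLEAR lv_win_sum.
    LOOP AT lt_window INTO lv_val.         " sum the current window
      lv_win_sum = lv_win_sum + lv_val.
    ENDLOOP.
    lv_rolling = lv_win_sum / lv_len.      " rolling average of the last 40

    lv_all_sum = lv_all_sum + lv_new.      " running totals for the overall mean
    lv_all_cnt = lv_all_cnt + 1.
    lv_overall = lv_all_sum / lv_all_cnt.

    Note that the overall average only needs the running sum and count, so the ever-growing array is not strictly necessary unless you also want the raw history.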

  • Sample/Demo Data for Oracle 10g

    Is there such data available? If so, can you point me to it please? Thanks!

    Depending on the install method you use, you may not have included 'the samples'.
    For 10g, the samples are
    1) included with the database install as a transportable tablespace and available IF you ask using DBCA at database create time;
    2) included on the Companion CD which is a separate download and install;
    3) Documented in the Oracle Samples manual that is in the 10g doc set (http://tahiti.oracle.com)
    In addition, many features and tutorials (http://otn.oracle.com/obe) have their own sample data set. Those would be downloaded from the feature portal, which you access by starting at http://otn.oracle.com and going to the Product or Technology portal (left edge menu) and then drilling to the specific features.
    For example, Discoverer 'end user' is a middleware product now well hidden in the BI Standard Edition. To get to the Discoverer Relational and Olap samples, use
    http://otn.oracle.com
    -> see Products in left side menu and click on Middleware
    -> scroll down to the Business Intelligence area (middle section) and click on 'Business Intelligence'
    -> scroll to the "Business Intelligence Foundation" area (middle section) and click on "Oracle BI Standard Edition"
    -> scroll down (middle area) and click on "Oracle BI Discoverer" to get to the "Oracle BI Discoverer" portal
    -> Samples and tutorials would be available through the links on the right side menu.
    (Now that I have that figured out, Oracle will probably change the layout <g>)

  • Some help on sample data for BPC

    Hi
    I am trying to create a demo BPC application (NW 7.5) as a self-learning exercise. Can someone please help me with sample master data and sample transaction data for a consolidation application?
    I am trying to load a good quantity of data, so that some meaningful exercises can be done. Any help in this regard will be highly appreciated. I am not looking for how to load the data.
    Thanks in advance
    Rajesh

    Hi Rajesh
    Have you run the program UJS_ACTIVATE_CONTENT to create the ApShell Appset?
    Additionally you can download the starter kits which should include some basic data to start you off.
    I hope this helps
    Kind Regards
    Daniel

  • Lock Up Your Data for Up to 90% Less Cost than On-Premises Solutions with NetApp AltaVault

    June 2015
    Explore
    Data-Protection Services from NetApp and Services-Certified Partners
    Whether delivered by NetApp or by our professional and support services certified partners, these services help you achieve optimal data protection on-premises and in the hybrid cloud. We can help you address your IT challenges for protecting data with services to plan, build, and run NetApp solutions.
    Plan Services—We help you create a roadmap for success by establishing a comprehensive data protection strategy for:
    Modernizing backup for migrating data from tape to cloud storage
    Recovering data quickly and easily in the cloud
    Optimizing archive and retention for cold data storage
    Meeting internal and external compliance regulations
    Build Services—We work with you to help you quickly derive business value from your solutions:
    Design a solution that meets your specific needs
    Implement the solution using proven best practices
    Integrate the solution into your environment
    Run Services—We help you optimize performance and reduce risk in your environment by:
    Maximizing availability
    Minimizing recovery time
    Supplying additional expertise to focus on data protection
    Rachel Dines
    Product Marketing, NetApp
    The question is no longer if, but when you'll move your backup-and-recovery storage to the cloud.
    As a genius IT pro, you know you can't afford to ignore cloud as a solution for your backup-and-recovery woes: exponential data growth, runaway costs, legacy systems that can't keep pace. Public or private clouds offer near-infinite scalability, deliver dramatic cost reductions and promise the unparalleled efficiency you need to compete in today's 24/7/365 marketplace.
    Moreover, an ESG study found that backup and archive rank first among workloads enterprises are moving to the cloud.
    Okay, fine. But as a prudent IT strategist, you demand airtight security and complete control over your data as well. Good thinking.
    Hybrid Cloud Strategies Are the Future
    Enterprises, large and small, are searching for the right blend of availability, security, and efficiency. The answer lies in achieving the perfect balance of on-premises, private cloud, and public services to match IT and business requirements.
    To realize the full benefits of a hybrid cloud strategy for backup and recovery operations, you need to manage the dynamic nature of the environment— seamlessly connecting public and private clouds—so you can move your data where and when you want with complete freedom.
    This begs the question of how to integrate these cloud resources into your existing environment. It's a daunting task. And, it's been a roadblock for companies seeking a simple, seamless, and secure entry point to cloud—until now.
    Enter the Game Changer: NetApp AltaVault
    NetApp® AltaVault® (formerly SteelStore) cloud-integrated storage is a genuine game changer. It's an enterprise-class appliance that lets you leverage public and private clouds with security and efficiency as part of your backup and recovery strategy.
    AltaVault integrates seamlessly with your existing backup software. It compresses, deduplicates, encrypts, and streams data to the cloud provider you choose. AltaVault intelligently caches recent backups locally while vaulting older versions to the cloud, allowing for rapid restores with off-site protection. This results in a cloud-economics–driven backup-and-recovery strategy with faster recovery, reduced data loss, ironclad security, and minimal management overhead.
    AltaVault delivers both enterprise-class data protection and up to 90% less cost than on-premises solutions. The solution is part of a rich NetApp data-protection portfolio that also includes SnapProtect®, SnapMirror®, SnapVault®, NetApp Private Storage, Cloud ONTAP®, StorageGRID® Webscale, and MetroCluster®. Unmatched in the industry, this portfolio reinforces the data fabric and delivers value no one else can provide.
    Figure 1) The NetApp AltaVault cloud-integrated storage appliance. (Source: NetApp, 2015)
    Four Ways Your Peers Are Putting AltaVault to Work
    How is AltaVault helping companies revolutionize their backup operations? Here are four ways your peers are improving their backups with AltaVault:
    Killing Complexity. In a world of increasingly complicated backup and recovery solutions, financial services firm Spot Trading was pleased to find its AltaVault implementation extremely straightforward—after pointing their backup software at the appliance, "it just worked."
    Boosting Efficiency. Australian homebuilder Metricon struggled with its tape backup infrastructure and rapid data growth before it deployed AltaVault. Now the company has reclaimed 80% of the time employees formerly spent on backups—and saved significant funds in the process.
    Staying Flexible. Insurance broker Riggs, Counselman, Michaels & Downes feels good about using AltaVault as its first foray into public cloud because it isn't locked in to any one approach to cloud—public or private. The company knows any time it wants to make a change, it can.
    Ensuring Security. Engineering firm Wright Pierce understands that if you do your homework right, it can mean better security in the cloud. After doing its homework, the firm selected AltaVault to securely store backup data in the cloud.
    Three Flavors of AltaVault
    AltaVault lets you tap into cloud economics while preserving your investments in existing backup infrastructure, and meeting your backup and recovery service-level agreements. It's available in three form factors: physical, virtual, and cloud-based.
    1. AltaVault Physical Appliances
    AltaVault physical appliances are the industry's most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Companies deploy AltaVault physical appliances in the data center to protect large volumes of data. These datasets typically require the highest available levels of performance and scalability.
    AltaVault physical appliances are built on a scalable, efficient hardware platform that's optimized to reduce data footprints and rapidly stream data to the cloud.
    2. AltaVault Virtual Appliances for Microsoft Hyper-V and VMware vSphere
    AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They're also perfect for enterprises that want to safeguard branch offices and remote offices with the same level of protection they employ in the data center.
    AltaVault virtual appliances deliver the flexibility of deploying on heterogeneous hardware while providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto VMware vSphere or Microsoft Hyper-V hypervisors—so you can choose the hardware that works best for you.
    3. AltaVault Cloud-based Appliances for AWS and Microsoft Azure
    For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are key to enabling cloud-based recovery.
    On-premises AltaVault physical or virtual appliances seamlessly and securely back up your data to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance in AWS or Azure and recover data in the cloud. Usage-based, pay-as-you-go pricing means you pay only for what you use, when you use it.
    AltaVault solutions are a key element of the NetApp vision for a Data Fabric; they provide the confidence that—no matter where your data lives—you can control, integrate, move, secure, and consistently manage it.
    Figure 2) AltaVault integrates with existing storage and software to securely send data to any cloud. (Source: NetApp, 2015)
    Putting AltaVault to Work for You
    Four common use cases illustrate the different ways that AltaVault physical and virtual appliances are helping companies augment and improve their backup and archive strategies:
    Backup modernization and refresh. Many organizations still rely on tape, which increases their risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability. AltaVault serves as a tape replacement or as an update of old disk-based backup appliances and virtual tape libraries (VTLs).
    Adding cloud-integrated backup. AltaVault makes a lot of sense if you already have a robust disk-to-disk backup strategy, but want to incorporate a cloud option for long-term storage of backups or to send certain backup workloads to the cloud. AltaVault can augment your existing purpose-built backup appliance (PBBA) for a long-term cloud tier.
    Cold storage target. Companies want an inexpensive place to store large volumes of infrequently accessed file data for long periods of time. AltaVault works with CIFS and NFS protocols, and can send data to low-cost public or private storage for durable long-term retention.
    Archive storage target. AltaVault can provide an archive solution for database logs or a target for Symantec Enterprise Vault. The simple-to-use AltaVault management platform can allow database administrators to manage the protection of their own systems.
    We see two primary use cases for AltaVault cloud-based appliances, available in AWS and Azure clouds:
    Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, AltaVault cloud-based appliances are key to enabling cloud-based disaster recovery. Via on-premises AltaVault physical or virtual appliances, data is seamlessly and securely protected in the cloud.
    Protect cloud-based workloads.  AltaVault cloud-based appliances offer an efficient and secure approach to backing up production workloads already running in the public cloud. Using your existing backup software, AltaVault deduplicates, encrypts, and rapidly migrates data to low-cost cloud storage for long-term retention.
    The benefits of cloud—infinite, flexible, and inexpensive storage and compute—are becoming too great to ignore. AltaVault delivers an efficient, secure alternative or addition to your current storage backup solution. Learn more about the benefits of AltaVault and how it can give your company the competitive edge you need in today's hyper-paced marketplace.
    Rachel Dines is a product marketing manager for NetApp where she leads the marketing efforts for AltaVault, the company's cloud-integrated storage solution. Previously, Rachel was an industry analyst for Forrester Research, covering resiliency, backup, and cloud. Her research has paved the way for cloud-based resiliency and next-generation backup strategies.

    You didn't say what phone you have - but you can set it to update and backup and sync over wifi only - I'm betting that those things are happening "automatically" using your cellular connection rather than wifi.
    I sync my email automatically when I have a wifi connection, but I can sync manually if I need to. Downloads happen for me only on wifi; photo and video backup are only over wifi; app updates are only over wifi... check your settings. Another recent gotcha is Facebook and videos. LOTS of people are posting videos on Facebook, and they automatically download and play UNLESS you turn them off. That can eat up your data in a hurry if you are on FB regularly.

  • Batch change file date to match exif date

    Hi All-
    I recently got a Mac Pro. Upon importing my photos to iPhoto, the file date for each is set to the date of the import. When using the Mac photo screensaver, the date that shows up for everything is the import date, rather than the date the photo was taken.
    Is there any way to batch change the pictures so that the file date matches the date the picture was taken?
    (Or, a way to have the screensaver use that date?)
    Thanks,
    -jamie

    Interesting, but here is what I found out. I was annoyed because when I sucked in about 1000 JPGs from my camera's memory stick, and then I copied them to my Windows boxen, the file dates were wrong. I was surprised that there isn't an easy way to do this... until your post!! Thanks. *APPLE, Please fix the way "Originals" are stored during iPhoto import*, since something is wrong here. The file dates stored on my memory stick should have been used not the moment I clicked import in iPhoto.
    So I moved my jhead to /usr/bin and gave it root ownership and 755 permissions.
    All of the pictures seem to be three deep here:
    cd ~/Pictures/iPhoto\ Library/Originals
    ls -lsGrt */*/*.jpg
    But that is only a small fraction of mine. Most end in .JPG:
    ls -lsGrt */*/*.JPG
    So here are the commands to do the correction of the file date and time. You don't need the find command or that script, although you might need to snoop around and make sure all of your files are here and not any deeper or in some other location. But this should be harmless to run as long as you want to alter every file date.
    jhead -ft */*/*.jpg
    jhead -ft */*/*.JPG
    And that pretty much does it. I don't see any images elsewhere that were brought in by iPhoto 6 on my machine. I looked for MPG files but I don't have any. This is from a video recording camera, so I don't have any MPGs.
    So jrsmobile, my mileage varied a lot, since your method missed about 95% of my photos!!

  • Splitting flat file data

    I want to store flat file data temporarily and as a backup, but its size is so huge that processing it in one go is difficult. Is there any 'Z' program that will split the records and put them into new files? Please give me an idea; your answer will be rewarded with maximum points if it helps me with this issue.

    Hi Manjula,
    Check out the program below; it should help you solve your requirement. It splits the records as per the specified limit and puts them into new files. I think it will help you.
    REPORT zc1_split_file MESSAGE-ID ztestmsg.

    TABLES: mara.

    DATA: BEGIN OF input,
            mandt LIKE mara-mandt,
            matnr LIKE mara-matnr,
            ersda LIKE mara-ersda,
            ernam LIKE mara-ernam,
            matkl LIKE mara-matkl,
          END OF input.

    DATA: i_mara_tab LIKE TABLE OF input WITH HEADER LINE,
          i_mara_temp LIKE TABLE OF input,
          w_mara_tab LIKE LINE OF i_mara_tab,
          v_newfile(120) TYPE c,
          v_no_lines TYPE i,
          v_split(4) TYPE n VALUE 1,
          v_count(4) TYPE n VALUE 1.

    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    PARAMETERS: p_file TYPE rlgrap-filename MEMORY ID file,
                p_count(4) TYPE n.
    SELECTION-SCREEN END OF BLOCK b1.

    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      " display the dialog box for opening the file
      CALL FUNCTION 'WS_FILENAME_GET'
        EXPORTING
          mask     = '*.*,*.*.'  " file mask, all files
        IMPORTING
          filename = p_file.

    START-OF-SELECTION.
      IF p_file IS INITIAL. " check that a file was given
        MESSAGE i001.
        EXIT.
      ENDIF.
      IF p_count IS INITIAL. " check that a record limit was given
        MESSAGE i002.
        EXIT.
      ENDIF.
      CALL FUNCTION 'WS_UPLOAD'
        EXPORTING
          filename = p_file
          filetype = 'DAT'
        TABLES
          data_tab = i_mara_tab
        EXCEPTIONS
          conversion_error        = 1
          file_open_error         = 2
          file_read_error         = 3
          invalid_type            = 4
          no_batch                = 5
          unknown_error           = 6
          invalid_table_width     = 7
          gui_refuse_filetransfer = 8
          customer_error          = 9
          OTHERS                  = 10.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
      LOOP AT i_mara_tab INTO w_mara_tab.
        IF v_count < p_count.
          APPEND w_mara_tab TO i_mara_temp.
          v_count = v_count + 1.
        ELSE.
          APPEND w_mara_tab TO i_mara_temp. " block is full: write it out
          v_count = v_count + 1.
          PERFORM split_records.
          REFRESH i_mara_temp.
          v_count = 1.
          v_split = v_split + 1.
        ENDIF.
      ENDLOOP.
      IF v_count NE 1. " write out the last, partially filled block
        PERFORM split_records.
      ENDIF.

    " Form SPLIT_RECORDS: write the current block to a numbered file
    FORM split_records.
      CONCATENATE p_file v_split INTO v_newfile.
      CALL FUNCTION 'WS_DOWNLOAD'
        EXPORTING
          filename = v_newfile
          filetype = 'DAT'
          mode     = 'A'
        TABLES
          data_tab = i_mara_temp
        EXCEPTIONS
          file_open_error         = 1
          file_write_error        = 2
          invalid_filesize        = 3
          invalid_type            = 4
          no_batch                = 5
          unknown_error           = 6
          invalid_table_width     = 7
          gui_refuse_filetransfer = 8
          customer_error          = 9
          no_authority            = 10
          OTHERS                  = 11.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
      WRITE: / 'Data has been written into:', v_newfile.
    ENDFORM.
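    For example, if the uploaded file holds 5,000 records and p_count = 1000, the loop writes five files; since v_split is a 4-digit numeric, the suffix is zero-padded and simply appended to the input file name, giving <p_file>0001 through <p_file>0005.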
    **Reward points if found useful....
    Regards,
    Manikandan.A

  • To read COMTRADE file in Labview there is an example provided. Can somebody provide the sample .cfg and .dat files required for its working?

    Thanks for the reply.
    But this library doesn't contain any sample .cfg and .dat files which can be read and understood. Can you please provide the same?

  • Sample file for TBDM (market data import) required

    Dear Experts,
    Please help me with a sample file, or the file format with some dummy data, to run transaction TBDM for currency rates upload.
    I tried all possibilities; some error or the other always comes up.
    Appreciate your <removed by moderator> help.
    Bharathi.J
    Edited by: Thomas Zloch on Apr 20, 2011 1:00 PM - priority adjusted

    Hi,
    This question is very old, but when you look for this on Google, this is the first page that comes up.
    To make your own uploadable table in Excel you need to follow this structure:
    Required means that you must fill that field, and Empty means that you should leave it blank.
    The length of each field is very important: if you don't use exactly that length, the upload won't work.
    1 Data Class (Fixed value '01') - Length 2 - Required
    2 Key 1 FROM currency - Length 20 - Required
    3 Key 2 TO currency - Length 20 - Required
    4 Category Interest type - Length 15 - Required
    5 Date Calculation Date (Format DDMMYYYY) - Length 8 - Required
    6 Time Calculation Time (Format HHMMSS) - Length 6 - Empty
    7 Value Value of Data - Length 20 - Required (Use dot for decimals not commas)
    8 Currency Not applicable - Length 20 - Empty
    9 FROM Ratio Translation Ratio from - Length 7 - Required
    10 TO Ratio Translation Ratio to - Length 7 - Required
    11 Other Not applicable - Length 5 - Empty
    12 Status Error status (values 50..99)  - Length 2 - Empty
    13 Error message Error message - Length 80 - Empty
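    For illustration, a single made-up row (rate type M is assumed for currency rates; the values are invented) could look like the line below. The square brackets only mark the column boundaries and are not part of the file; every column is space-padded to exactly the length listed above:
    [01][USD                 ][EUR                 ][M              ][05012015][      ][1.0850              ][                    ][1      ][1      ]
    followed by the three remaining empty columns (lengths 5, 2 and 80) as blanks.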
    In Excel you can define the column width by right-clicking on the column. Do that to give every column the length listed above, then fill the required fields.
    You should define every field in "Text" format, because numbers like the first one must stay 01; it won't work with just 1. The same goes for the date: 5012015 won't work; it has to be 8 digits, like 05012015 (05.01.2015).
    After you define every column width and fill the required fields, remember to delete the headers if you used them.
    Save the file as .XLS, because if you only save it as .prn you won't be able to edit it again. Then save a copy as .prn (Formatted Text, Space Delimited) and upload it with TBDM. Every time you need to upload again, open the XLS file, put in your data, and save as .prn.
    And that's it.
    Message was edited by: Pablo Romo

  • Generate Sample data for the cube .- I don't want to use the flat file

    Hello BI Gurus,
    I know there is an SAP standard program that writes records to the cube. I think you first change the cube from basic to real-time, and then another program displays an ALV layout where you can create, delete, or change the records.
    I don't want to create a flat file load for this. I want to create 5 records to check the layout.
    I forgot the program name.
    Can you please let me know the program name?
    Thanks
    Senthil

    A long time for a simple question, so here it is: the program name is CUBE_SAMPLE_CREATE.
    Please mark this question as answered.
    Regards,
    André

  • How to see data for particular date from a alert log file

    Hi Experts,
    I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log>alert.txt and then viewing the whole thing. But is there any easier way to look at a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used this
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is the sample log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
    .dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
    .dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
    .dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
    .dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
    .dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
    .dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
    .dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
    .dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan

  • Informatica - import data from line 6 in sample files in universal adapter

    Hi ,
    I am trying to extract data from R12 tables into the sample CSV files provided by Oracle in the Universal Adapter for CRM Analytics.
    As per Oracle guidelines we are supposed to import data from line 6, as the first five lines will be skipped during the ETL process.
    While importing the target file in the Target Designer, I am entering 6 as the value in the LOV box "Start import at Row".
    Still, data is loaded from the first line of the file and not the 6th line.
    Please let me know more about how to achieve this requirement.
    Thanks,
    Chaitanya.

    HI,
    Please let me know the solution for this. It is very high priority for me now.
    I want to extract data into the sample files provided by Oracle, starting from the 6th line.
    At present I am able to load only from the first line of the .csv file.
    Thanks,
    Chaitanya

  • Save multiple sample rate data to TDM file

    Hello, LV connoisseurs
    I use 2 Multifunction boards and LV 7.1 to gather slow and fast data simultaneously, 'slow' being 10 temperatures at 150 Hz, 'fast' being 12 pressures at 15 kHz, rate factor between slow and fast is constant 100. My acquisition is set to 'continuous' with blocks of 15 and 1500 resp., so each sample set takes 0.1 sec.
    Currently, I use 2 loops, one for each board. Slow data are written to .lvm, fast to .tdm, and this works fine.
    But I wonder if in this configuration it might be possible to
    - use one loop only (yes, trivial) with the main target being to
    - write data into two channel groups of one .tdm file, one for the 'fast' and the other for the 'slow' data?
    If at all possible, would this require a producer/consumer scheme to allow the interspersing of data, or can I do this directly?
    Thank You for your input.
    Michael

    Just to make the distinction, you should probably be using TDMS (the S stands for streaming) instead of TDM if you are continuously writing data. TDM is more for writing a snap-shot and doesn't work so well for continuous data (big memory leaks last time I used it back in '06).
    Also, as no time data is stored, you will probably want separate timestamp channels for your fast and slow data.
    As mentioned, it is no problem having multiple writing loops using the same TDMS reference.
    nrp
    CLA

  • I need sample basic data file containing Product,Market,Measures etc

    I need a Sample Basic data file containing Product, Market, Measures, etc. to load data into the Sample Basic application. Where can I get this?

    As I am the World Domain Lead for Sample.Basic (this is a joke, btw, it's sort of like being King of my front stoop, and besides, I just made it up) I will note two things about CALCDAT.TXT.
    1) It is not in columnar format, but instead in Essbase free-form data load format as user2571802 notes. This means if you take a look at it with a text editor, it's a bit confusing to read till you realize that it contains all dimensional information in the data file itself (see the illustrative line below). See the DBAG's section on [Data Sources That Do Not Need A Rules File|http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_dbag/ddlintro.htm#ddlintro1029529].
    2) CALCDAT.TXT contains level 0 and calculated results. Just load it -- there's no need to calculate it.
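    To give a feel for that free-form layout, here is a made-up line in the same style (not copied from CALCDAT.TXT; the value is invented). Each record names a member from every dimension of Sample.Basic and then gives the data value:
    "New York" "100-10" Jan Sales Actual 678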
    Regards,
    Cameron Lackpour
