Flexible upload of profit center nodes from CSV is restructured after success

Hi,
I'm having a problem using flexible upload to load profit center nodes into an existing structure. The result is that, inside the profit center hierarchy, a node 1000 is created and the nodes specified in the CSV file are created underneath it. The hierarchy structure is not maintained: the newly created nodes end up as siblings instead of being nested under each other.
A profit center group hierarchy with exactly the same structure was created successfully this way, so I do not understand why the same is not possible for the profit center hierarchy. This also demonstrates that the flexible upload function is understood and configured correctly, and that the file is set up properly.
I used this help page as a reference: http://help.sap.com/saphelp_sem60/helpdata/en/62/f7e73ac6e7ec28e10000000a114084/frameset.htm
I'm looking to understand why the nodes are created differently for a profit center group than for a profit center. I suspect the problem lies in the fact that the profit center hierarchy was created under a specific configuration that does not allow me to add nodes to it.
This is the structure:
H
--N1
   --N2
      -- The node that is added here and one level below is placed under "N3 1000"
--N3 1000
   --The node is added here
   --The child is added as a sibling
I noticed that N3 is labeled 1000, which is the controlling area. When referring to the log, I found that the CO area was automatically populated with 1000, despite this not being set in the CSV file. I therefore assume there is a relationship between the nodes being added under the 1000 node and the controlling area.
Any suggestions are welcome. Should more detail about the problem be needed, please let me know.

Hi,
IMHO, the culprit is the CO area set as an external attribute in the hierarchies of PC and PCG.
AFAIR, CO area might/must be a linked attribute of PC (see the last tab strip in the PC InfoObject screen), and that's why it is fixed in the ConsArea settings (and populated automatically during the data load).
If I were you, I would try the following:
- Ask the Basis guys to make a backup copy of the system.
- Delete CO area from the external chars of the PC & PCG hierarchies.
- Set CO area as a linked attribute of PC & PCG (if it is really needed - very often it is not, if CO area has just a single value, which is fixed in the ConsArea settings). => NB => These changes are very significant for BW, and it's not always possible to make them without data deletion.
- Regenerate the BCS data basis (and probably the ConsArea).
The last step is tricky because SEM-BCS very often does not regenerate the data basis properly (it simply doesn't see the changes). In other forum topics I have explained several times how to force the system to see these changes (just drag and drop any role in the definition of the BCS data basis and then save it).
This (a significant change in the underlying properties of BW InfoObjects) is the main pain point when changing BW for SEM-BCS. In my practice, unfortunately, it required no less than several attempts (even with dumps). And unfortunately it may require full data deletion in the cubes that play a role in the data basis. That can largely be blamed on the bad design of the BW structures for SEM-BCS.
Hopefully, you'll avoid it.
Good luck!
Linked attribute = compound attribute.
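To illustrate the compounding (a generic BW example, not taken from your system): in standard content 0PROFIT_CTR is compounded to 0CO_AREA, so a profit center value is only unique together with its controlling area, and the system has to derive the CO area for every node it touches:
    0CO_AREA    = 1000
    0PROFIT_CTR = PC42    " full compound key: 1000/PC42 (PC42 is a made-up value)
That is also why your load log shows 1000 being filled automatically although it is absent from the CSV.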
Edited by: Eugene Khusainov on Jan 31, 2011 4:25 AM

Similar Messages

  • Uploading/Downloading table to/from *.csv - file

    Hi all.
    First I need to upload this internal table (actually it is a copy of a database table) to a *.csv file, and then to be able to download the table back from it.
    All this should be done using field symbols and methods GUI_UPLOAD, GUI_DOWNLOAD from class CL_GUI_FRONTEND_SERVICES.
    *-- STRUCTURE OF INTERNAL TABLE
    TYPES: BEGIN OF in_tab,
            mandt TYPE zng_so_head-mandt,
            so_num TYPE zng_so_head-so_num,          "type numc
            vend_num TYPE zng_so_head-vend_num,      "type numc
            cust_num TYPE zng_so_head-cust_num,      "type numc
            so_date TYPE zng_so_head-so_date,        "type dats
           END OF in_tab.
    *-- INTERNAL TABLE HOLDING LIST DATA
    DATA res_tab TYPE TABLE OF in_tab WITH HEADER LINE.
    START-OF-SELECTION.
    SELECT h~mandt h~so_num h~vend_num h~cust_num h~so_date
    INTO TABLE res_tab FROM zng_so_head AS h.
    thanks all.
    Message was edited by:
            nikolai gurlenia

    Hi,
    I hope the following code will solve your problem.
    DATA : it_itab  TYPE TABLE OF string WITH HEADER LINE,
           v_file1  TYPE rlgrap-filename,
           v_file2  TYPE string.
    CALL FUNCTION 'F4_FILENAME'
      IMPORTING
        file_name = v_file1.
    v_file2 = v_file1.
    CALL METHOD cl_gui_frontend_services=>gui_upload
      EXPORTING
        filename                = v_file2
      CHANGING
        data_tab                = it_itab[]
      EXCEPTIONS
        file_open_error         = 1
        file_read_error         = 2
        no_batch                = 3
        gui_refuse_filetransfer = 4
        invalid_type            = 5
        no_authority            = 6
        unknown_error           = 7
        bad_data_format         = 8
        header_not_allowed      = 9
        separator_not_allowed   = 10
        header_too_long         = 11
        unknown_dp_error        = 12
        access_denied           = 13
        dp_out_of_memory        = 14
        disk_full               = 15
        dp_timeout              = 16
        not_supported_by_gui    = 17
        error_no_gui            = 18
        OTHERS                  = 19.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                 WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    IF it_itab[] IS NOT INITIAL.
      CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          filename                = v_file2
        CHANGING
          data_tab                = it_itab[]
        EXCEPTIONS
          file_write_error        = 1
          no_batch                = 2
          gui_refuse_filetransfer = 3
          invalid_type            = 4
          no_authority            = 5
          unknown_error           = 6
          header_not_allowed      = 7
          separator_not_allowed   = 8
          filesize_not_allowed    = 9
          header_too_long         = 10
          dp_error_create         = 11
          dp_error_send           = 12
          dp_error_write          = 13
          unknown_dp_error        = 14
          access_denied           = 15
          dp_out_of_memory        = 16
          disk_full               = 17
          dp_timeout              = 18
          file_not_found          = 19
          dataprovider_exception  = 20
          control_flush_error     = 21
          not_supported_by_gui    = 22
          error_no_gui            = 23
          OTHERS                  = 24.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDIF.
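    By the way, gui_upload/gui_download above work on a plain table of strings. Since your task requires field symbols, here is an untested sketch (assuming the res_tab definition from your first post) that turns the structured table into comma-separated lines generically before the download:
    DATA: lt_csv  TYPE TABLE OF string,
          lv_line TYPE string.
    FIELD-SYMBOLS: <ls_row>   TYPE any,
                   <lv_field> TYPE any.
    LOOP AT res_tab ASSIGNING <ls_row>.
      CLEAR lv_line.
      DO.
        " walk the components of the current row generically
        ASSIGN COMPONENT sy-index OF STRUCTURE <ls_row> TO <lv_field>.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        IF sy-index = 1.
          lv_line = <lv_field>.
        ELSE.
          CONCATENATE lv_line <lv_field> INTO lv_line SEPARATED BY ','.
        ENDIF.
      ENDDO.
      APPEND lv_line TO lt_csv.
    ENDLOOP.
    " pass lt_csv as data_tab to gui_download; after gui_upload,
    " SPLIT each line AT ',' to fill the structure again.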
    Reward points if the answer is helpful.
    Regards,
    Mukul

  • Random Records are being skipped while uploading data in PSA from CSV File

    Hi Experts,
    I am facing an issue with data uploads into the PSA through a CSV file: random records are being skipped.
    First load
    We have a flat file (.txt in CSV format) which contains 380240 records.
    We upload the flat file data into the PSA from the application server, but it uploads only 380235 records; 5 records are skipped.
    Second load
    We re-generated the same file the next day with the same number of records (380240), but this time it uploaded only 380233 records; 7 records were skipped.
    We found one skipped record from the first load (based on the key column combination, by cross-verifying the source against the PSA table). But the same record (same key column combination) is present in the second load, which means the same records are not being skipped every time.
    Earlier (5 months ago) we loaded 641190 records from a flat file into the same PSA, and all 641190 records were uploaded successfully.
    There has been no change in the source, PSA, or flat file structure.
    Thanks & Regards
    Bijendra

    Hi Bijendra,
    Please check the file: if a record begins with an escape sign, that record may be skipped, which would explain the missing records.
    Check for escape signs such as ';' - if one is present at the beginning of a record, that record will be skipped entirely.
    Regards
    vamsi

  • I can't upload photos to facebook from the photo library after upgrading to iOS 8

    After updating to iOS 8 I can't upload photos from the photo library to Facebook... I can't even upload photos with the Safari browser... What can I do now??

    It is peculiar that Chrome, Safari, and Skype can access FB, but not the iPhoto uploader.
    Do you see any error messages/diagnostics in the Console window when you try to connect to Facebook?
    Launch a Console window from Applications > Utilities and clear the Console window. Then try to upload. Are there any new messages?
    Also launch a Terminal and have a look at whether "facebook.com" is properly resolved. Type
    ping facebook.com
    into the window. Do you see any transmissions? What is the IP address used?
    I see:
    PING facebook.com (173.252.110.27): 56 data bytes
    64 bytes from 173.252.110.27: icmp_seq=0 ttl=243 time=110.486 ms
    64 bytes from 173.252.110.27: icmp_seq=1 ttl=243 time=109.365 ms
    64 bytes from 173.252.110.27: icmp_seq=2 ttl=243 time=110.101 ms
    64 bytes from 173.252.110.27: icmp_seq=3 ttl=242 time=109.829 ms
    64 bytes from 173.252.110.27: icmp_seq=4 ttl=242 time=111.323 ms
    64 bytes from 173.252.110.27: icmp_seq=5 ttl=242 time=110.346 ms
    64 bytes from 173.252.110.27: icmp_seq=6 ttl=242 time=110.708 ms
    64 bytes from 173.252.110.27: icmp_seq=7 ttl=242 time=112.685 ms
    64 bytes from 173.252.110.27: icmp_seq=8 ttl=243 time=124.256 ms
    64 bytes from 173.252.110.27: icmp_seq=9 ttl=243 time=112.106 ms
    ^C
    --- facebook.com ping statistics ---
    10 packets transmitted, 10 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 109.365/112.120/124.256/4.159 ms
    Try the same with "dig":
    dig facebook.com
    ; <<>> DiG 9.8.3-P1 <<>> facebook.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25051
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;facebook.com.                              IN          A
    ;; ANSWER SECTION:
    facebook.com.                    775          IN          A 173.252.110.27
    ;; Query time: 11 msec
    Can you "ping" facebook?

  • Master Data Flexible Upload from Application Server?

    Hi Group,
    Anyone know if it's possible to do a flexible upload of master data from a flat file on the application server?
    I'd like to upload FS items and hierarchies from the BCS app server into our development environment, then transport to QA & PROD.  We would obviously need some way to "save" after the upload was complete.
    In the workbench, I can right-click --> Execute on the flex upload method and get a pop-up for a workstation file. In a data collection method, I can specify a logical file & filename, but I cannot choose master data (which might have allowed me to run a data collection method via the workbench).
    Anyone accomplished this before?  Or have any ideas if/how this is possible?
    Thanks,
    - Chris

    Hi Christopher,
    It is not possible to assign a flexible upload method for master data to a data collection method.
    Flexible upload of master data has to be executed on its own, from the workbench. This is the system design.
    Regards
    Narayana Murty

  • Upload data from excel:can we upload from .csv only?

    Hi Experts,
    Can we upload data from a .CSV file only? Is this a limitation of ABAP? What if I have an Excel sheet with multiple worksheets and I want to pick data from one of the worksheets; is this scenario possible?
    Thanks and Regards,
    Rohit

    Hi Rohit,
    In CRM it works somewhat differently. The following code snippet should help you.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_infile.
      CALL FUNCTION 'WS_FILENAME_GET'
        EXPORTING
          DEF_FILENAME     = 'p_infile'
          DEF_PATH         = ' '
          MASK             = '*.txt'
          MODE             = '0'
          TITLE            = 'UPLOAD TAB DELIMITED FILE'(078)
        IMPORTING
          FILENAME         = p_infile
    *     RC               =
        EXCEPTIONS
          INV_WINSYS       = 1
          NO_BATCH         = 2
          SELECTION_CANCEL = 3
          SELECTION_ERROR  = 4
          OTHERS           = 5.
      IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    *START-OF-SELECTION
    START-OF-SELECTION.
      gd_file = p_infile.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          FILENAME                    = gd_file
    *   FILETYPE                      = 'ASC'
          HAS_FIELD_SEPARATOR         = 'X'
    *   HEADER_LENGTH                 = 0
    *   READ_BY_LINE                  = 'X'
    *   DAT_MODE                      = ' '
    *   CODEPAGE                      = ' '
    *   IGNORE_CERR                   = ABAP_TRUE
    *   REPLACEMENT                   = '#'
    *   CHECK_BOM                     = ' '
    *   VIRUS_SCAN_PROFILE            =
    *   NO_AUTH_CHECK                 = ' '
    * IMPORTING
    *   FILELENGTH                    =
    *   HEADER                        =
        TABLES
          DATA_TAB                   = it_record
        EXCEPTIONS
          FILE_OPEN_ERROR               = 1
          FILE_READ_ERROR               = 2
          NO_BATCH                      = 3
          GUI_REFUSE_FILETRANSFER       = 4
          INVALID_TYPE                  = 5
          NO_AUTHORITY                  = 6
          UNKNOWN_ERROR                 = 7
          BAD_DATA_FORMAT               = 8
          HEADER_NOT_ALLOWED            = 9
          SEPARATOR_NOT_ALLOWED         = 10
          HEADER_TOO_LONG               = 11
          UNKNOWN_DP_ERROR              = 12
          ACCESS_DENIED                 = 13
          DP_OUT_OF_MEMORY              = 14
          DISK_FULL                     = 15
          DP_TIMEOUT                    = 16
          OTHERS                        = 17.  " note: the statement must end with a period here
      IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        write: 'Error' , sy-subrc .
        skip.
      ENDIF.
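    To answer the original question more directly: GUI_UPLOAD reads flat files (tab-delimited or CSV) only, so it cannot pick a particular worksheet out of a multi-worksheet Excel workbook. One commonly used alternative is the function module ALSM_EXCEL_TO_INTERNAL_TABLE; the following is only a sketch, since that FM runs via OLE, needs Excel installed on the frontend PC and, as far as I recall, reads the active worksheet only (so have the user activate the sheet you need first):
    DATA: lv_xlsfile TYPE rlgrap-filename,
          lt_excel   TYPE TABLE OF alsmex_tabline. " one entry per cell: row / col / value
    lv_xlsfile = gd_file.   " reuse the path selected above
    CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
      EXPORTING
        FILENAME                = lv_xlsfile
        I_BEGIN_COL             = 1
        I_BEGIN_ROW             = 1
        I_END_COL               = 10       " adjust to your sheet
        I_END_ROW               = 65536
      TABLES
        INTERN                  = lt_excel
      EXCEPTIONS
        INCONSISTENT_PARAMETERS = 1
        UPLOAD_OLE              = 2
        OTHERS                  = 3.
    IF sy-subrc <> 0.
      WRITE: / 'Excel upload failed, sy-subrc =', sy-subrc.
    ENDIF.
    Afterwards, loop over lt_excel and move each cell value into your own structure by row/column.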

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads that talk about loading data from a .csv file into an existing/new table, and also checked out Vikas's application regarding the same. Let me explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are:
    timesheet_entry_id, time_worked, timesheet_date, project_key.
    The csv columns are:
    project, utilization, project_key, timesheet_category, employee, timesheet_date, hours_worked, etc.
    What I need to know is whether, before the csv data is loaded into the timesheet table, there is any way of validating the project_key (which is the primary key of the projects table) against the projects table. I need to perform similar validations on other columns, such as customer_id against the customers table. Basically, the loading should happen only after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does? (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time.) Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads, which outline different ways to do it:
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can also create hidden items on the page to validate against the parent records before inserting data.
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com

  • Initial load of inventory level from csv - double datarows in query

    Hello everybody,
    a query result shown in a web browser seems strange to me, and I would be very glad if anyone could give me advice on how to solve the problem. As I do not think it is related to the query itself, I posted it in this forum.
    The query refers to an InfoCube for inventory management with a single non-cumulative key figure and two other cumulative key figures for the increase and decrease of inventory. The time reference characteristic is 0CALDAY. The initial load was processed from a flat file (CSV); the structure looks like this:
    Product group     XXX
    Day               20040101
    Quantity          1000
    Increase          0
    Decrease          0
    Unit               ST
    The initial load runs fine; the system writes all the records into the InfoCube. Unfortunately I do not know how to look at the records written into the cube, because only the cumulative key figures are shown in InfoCube -> Manage -> Contents.
    Well, when executing the query (a really simple one), the result is just strange: somehow there are two rows for each product group, with different dates, one for the 1st of January 2004 and the other for the 31st of December 2003, both containing 1000 units. The sum is 2000.
    It became more confusing when I loaded the data for increase and decrease: now the quantities and sums are correct, but the date of the initial load is a few days later than before; the data table in the query does not contain the 1st of January.
    Does anybody know what I did wrong, or where to find information about how to perform an initial load of inventory from CSV in a better way?
    Kind regards
    Peter

    Peter,
    Inventory is not that straightforward to evaluate, as it is non-cumulative. Basically this means that one KF is derived from one or two other KFs. You cannot see non-cumulative KFs in Manage InfoCube.
    Have you uploaded opening balances separately? If so, your data for the 31st of December is explained.
    In non-cumulative cubes, there need not be a posting on a particular day for a record to exist. For example, if you have a stock of 10 units on the 1st, then no postings on the 2nd and 3rd, and then an increase of 10 units on the 4th, the non-cumulative KF will still report 10 units for the 2nd and 3rd (the stock on the 1st rolled forward).
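    To put numbers on that roll-forward (same example):
    Day   Increase   Decrease   Non-cumulative KF (stock)
    1st   10         0          10
    2nd   0          0          10  (rolled forward)
    3rd   0          0          10  (rolled forward)
    4th   10         0          20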
    There is a "How to... Inventory Management" document on the SAP Service Marketplace that explains this quite nicely.
    Cheers
    Aneesh

  • How to upload Unicode encoding files from web?

    Hi everyone,
    I cannot manage to upload Unicode-encoded CSV files from the web. Currently I use class CL_HTMLB_MANAGER to upload files from the web. It works fine with ANSI-encoded files, but the file content is not uploaded correctly with Unicode-encoded files. In particular, I get innumerable "#" characters throughout the string that contains the file content (for example, instead of "SAP CATALOG CSV 2.0" I get "ÿþS#A#P# #C#A#T#A#L#O#G# #C#S#V# #2#.#0#").
    I did not find a solution to my issue in the forums, which is why I am now asking for your help.
    How can I upload Unicode-encoded files from the web? Do you know another way to upload files from the web that is Unicode compatible?
    Remark: I tried to upload Unicode-encoded files from SAP GUI using function module GUI_UPLOAD, and the upload is successful.
    Here is the code that I currently use.
    DATA:     lr_event_ex     TYPE REF TO if_htmlb_data,
               fileupload      TYPE REF TO cl_htmlb_fileupload,
               lr_upload_model TYPE REF TO /ccm/cl_bsp_upload_model,
               lr_error        TYPE REF TO /ccm/cx_file_upload.
    lr_event_ex =  cl_htmlb_manager=>get_event_ex( runtime->server->request ).
    IF lr_event_ex->event_name = 'fileUpload' AND lr_event_ex->event_type = 'upload'.
      fileupload ?= lr_event_ex.
      FREE lr_event_ex.
    * get the model
      lr_upload_model ?= me->get_model( model_id = 'mupl' ).
      IF NOT fileupload->file_name IS INITIAL.
    *   upload data
        TRY.
            CALL METHOD lr_upload_model->upload_data
              EXPORTING
                iv_file_name = fileupload->file_name
              CHANGING
                cv_xcontent  = fileupload->file_content.
          CATCH /ccm/cx_file_upload INTO lr_error.
        ENDTRY.
        FREE fileupload.
      ENDIF.
    ENDIF.
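    Update: I suspect the leading "ÿþ" is the UTF-16LE byte order mark, so perhaps the raw content only needs an explicit conversion before further processing. Here is a sketch of what I have in mind (hypothetical and untested; it assumes fileupload->file_content is the raw XSTRING, and '4103' is the SAP codepage for UTF-16LE):
    DATA: lo_conv   TYPE REF TO cl_abap_conv_in_ce,
          lv_string TYPE string.
    " decode the UTF-16LE bytes into a proper string
    lo_conv = cl_abap_conv_in_ce=>create( encoding = '4103'
                                          input    = fileupload->file_content ).
    lo_conv->read( IMPORTING data = lv_string ).
    " a leading byte-order-mark character may still have to be stripped from lv_string
    Would that be the right direction?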
    Thank you in advance for helping me.
    Best regards,
    Vanessa

    Hi There,
    Please check the details for the same.
    Link: http://helpx.adobe.com/creative-cloud/help/sync-files.html#Sync ("Sync or upload files")
    Troubleshoot sync:     
    Error: "Unable to sync files"
    Creative Cloud File Sync | Known issues
    Thanks,
    Atul Saini

  • SQL bulk copy from csv file - Encoding

    Hi Experts
    This is the first time I am creating a PowerShell script, and it is almost working. I just have a problem with the encoding on the actual bulk import to SQL: special characters from the text file are replaced with question marks. I have set the encoding when creating the csv file, but that does not seem to carry over to the actual bulk import. I have tried different scenarios for the encoding part, but I cannot find the proper solution.
    To shortly outline what the script does:
    Connect to Active Directory, fetching all users but excluding users in specific OUs
    Export all users to a csv in Unicode encoding
    Strip double-quote text qualifiers (if there is a better way of handling that, it will be much appreciated)
    Clear all records in the temporary SQL table
    Import records from the csv file into the temporary SQL table (this is where the encoding goes wrong)
    Update existing records in another table based on the records in the temporary table, and insert new records if not found
    The script looks as the following (any suggestions for optimizing the script are very welcome):
    # CSV file variables
    $path = Split-Path -parent "C:\Temp\ExportADUsers\*.*"
    $filename = "AD_Users.csv"
    $csvfile = $path + "\" + $filename
    $csvdelimiter = ";"
    $firstRowColumns = $true
    # Active Directory variables
    $searchbase = "OU=Users,DC=fabrikam,DC=com"
    $ADServer = 'DC01'
    # Database variables
    $sqlserver = "DB02"
    $database = "My Database"
    $table = "tblADimport"
    $tableEmployee = "tblEmployees"
    # Initialize
    Write-Host "Script started..."
    $elapsed = [System.Diagnostics.Stopwatch]::StartNew()
    # GET DATA FROM ACTIVE DIRECTORY
    # Import the ActiveDirectory Module
    Import-Module ActiveDirectory
    # Get all AD users not in specified OU's
    Write-Host "Retrieving users from Active Directory..."
    $AllADUsers = Get-ADUser -server $ADServer `
    -searchbase $searchbase -Filter * -Properties * |
    ?{$_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' `
    -and $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'}
    Write-Host "Users retrieved in $($elapsed.Elapsed.ToString())."
    # Define labels and get specific user fields
    Write-Host "Generating CSV file..."
    $AllADUsers |
    Select-Object @{Label = "UNID";Expression = {$_.objectGuid}},
    @{Label = "FirstName";Expression = {$_.GivenName}},
    @{Label = "LastName";Expression = {$_.sn}},
    @{Label = "EmployeeNo";Expression = {$_.EmployeeID}} |
    # Export CSV file and remove text qualifiers
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    Write-Host "Removing text qualifiers..."
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Write-Host "CSV file created in $($elapsed.Elapsed.ToString())."
    # DATABASE IMPORT
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data")
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")
    $batchsize = 50000
    # Delete all records in AD import table
    Write-Host "Clearing records in AD import table..."
    Invoke-Sqlcmd -Query "DELETE FROM $table" -Database $database -ServerInstance $sqlserver
    # Build the sqlbulkcopy connection, and set the timeout to infinite
    $connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"
    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
    $bulkcopy.DestinationTableName = $table
    $bulkcopy.bulkcopyTimeout = 0
    $bulkcopy.batchsize = $batchsize
    # Create the datatable and autogenerate the columns
    $datatable = New-Object System.Data.DataTable
    # Open the text file from disk
    $reader = New-Object System.IO.StreamReader($csvfile)
    $columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
    if ($firstRowColumns -eq $true) { $null = $reader.readLine()}
    Write-Host "Importing to database..."
    foreach ($column in $columns) {
        $null = $datatable.Columns.Add()
    }
    # Read in the data, line by line
    while (($line = $reader.ReadLine()) -ne $null) {
        $null = $datatable.Rows.Add($line.Split($csvdelimiter))
        $i++
        if (($i % $batchsize) -eq 0) {
            $bulkcopy.WriteToServer($datatable)
            Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
            $datatable.Clear()
        }
    }
    # Add in all the remaining rows since the last clear
    if ($datatable.Rows.Count -gt 0) {
        $bulkcopy.WriteToServer($datatable)
        $datatable.Clear()
    }
    # Clean Up
    Write-Host "CSV file imported in $($elapsed.Elapsed.ToString())."
    $reader.Close(); $reader.Dispose()
    $bulkcopy.Close(); $bulkcopy.Dispose()
    $datatable.Dispose()
    # Sometimes the Garbage Collector takes too long to clear the huge datatable.
    [System.GC]::Collect()
    # Update tblEmployee with imported data
    Write-Host "Updating employee data..."
    $queryUpdateUsers = "UPDATE $($tableEmployee)
    SET $($tableEmployee).EmployeeNumber = $($table).EmployeeNo,
    $($tableEmployee).FirstName = $($table).FirstName,
    $($tableEmployee).LastName = $($table).LastName
    FROM $($tableEmployee) INNER JOIN $($table) ON $($tableEmployee).UniqueNumber = $($table).UNID
    IF @@ROWCOUNT=0
    INSERT INTO $($tableEmployee) (EmployeeNumber, FirstName, LastName, UniqueNumber)
    SELECT EmployeeNo, FirstName, LastName, UNID
    FROM $($table)"
    try {
        Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryUpdateUsers
        Write-Host "Table $($tableEmployee) updated in $($elapsed.Elapsed.ToString())."
    }
    catch {
        Write-Host "An error occurred when updating $($tableEmployee) $($elapsed.Elapsed.ToString())."
    }
    Write-Host "Script completed in $($elapsed.Elapsed.ToString())."

    I can see that the Export-CSV exports into ANSI though the encoding has been set to UNICODE. Thanks for leading me in the right direction.
    No - it exports as Unicode if set to.
    Your export was wrong and is exporting nothing. Look closely at your code. This line exports nothing in Unicode:
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    There is no input object.
    This line converts any file to ANSI:
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Set-Content defaults to ANSI, so the output file is converted.
    Since you are just dumping into a table by manually building a recordset, why not go direct? You do not need a CSV. Just dump the results of the query to a datatable.
    https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
    This script dumps to a datatable object which can now be used directly in a bulkcopy.
    Here is an example of how easy this is using your script:
    $AllADUsers = Get-ADUser -server $ADServer -searchbase $searchbase -Filter * -Properties GivenName,SN,EmployeeID,objectGUID |
        Where-Object {
            $_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' -and
            $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'
        } |
        Select-Object @{N='UNID';E={$_.objectGuid}},
            @{N='FirstName';E={$_.GivenName}},
            @{N='LastName';E={$_.sn}},
            @{N='EmployeeNo';E={$_.EmployeeID}} |
        Out-DataTable
    $AllADUsers is now a datatable.  You can just upload it.
    ¯\_(ツ)_/¯

  • Loading data from CSV to Unix database

    Hi All
    We copied CSVs onto a UNIX box and tried to upload the data from CSV to Oracle. We are able to load data from the CSVs, but while some CSVs load perfectly, others load extra characters into the columns.
    Even when I put the CSV files on Windows and load the data into the UNIX database, I face the same problem.
    But if I use the same CSVs and load the data into a Windows database, it works fine.
    Can anybody suggest a solution?
    Regards,
    Kumar.

    ... oh, what confusion. I already answered in the ITtoolbox group:
    "Hi,
    the problem is the different character sets in the Windows and the UNIX environments, so some of the characters are misinterpreted.
    Is the database where you first loaded the csv's also on UNIX? How did you copy the files? Via FTP? I hope in ASCII mode?"
    Regards,
    Detlef

  • Short Dump - GETWA_NOT_ASSIGNED - Flexible upload of RFD

    Hi,
    SEM 6.0 / BI 7.0
    When I try to do a flexible upload for a sample RFD, I get the following dump:
    Runtime Errors         GETWA_NOT_ASSIGNED
    Date and Time          11/25/2008 16:22:01
    Short dump has not been completely stored (too big)
    Short text
        Field symbol has not yet been assigned.
    What happened?
        Error in the ABAP Application Program
        The current ABAP program "CL_UC_TASK_EXECUTION==========CP" had to be
         terminated because it has
        come across a statement that unfortunately cannot be executed.
    Error analysis
        You attempted to access an unassigned field symbol
        (data segment 32776).
        This error may occur if
        - You address a typed field symbol before it has been set with
          ASSIGN
        - You address a field symbol that pointed to the line of an
          internal table that was deleted
        - You address a field symbol that was previously reset using
          UNASSIGN or that pointed to a local field that no
          longer exists
        - You address a global function interface, although the
          respective function module is not active - that is, is
          not in the list of active calls. The list of active calls
          can be taken from this short dump.
    Trigger Location of Runtime Error
        Program                               CL_UC_TASK_EXECUTION==========CP
        Include                                 CL_UC_TASK_EXECUTION==========CM02V
        Row                                     155
        Module type                          (METHOD)
        Module Name                        COMPARE_OLD_AND_NEW_DOCS
    Flexible upload - Delete all / Cumulative
    Has anyone encountered this type of dump before?
    My observation
    (Data rows contain the following columns: Item / Company / Trading partner / PV LC)
    When I give a trading partner which is not defined in the system, the system interprets the header row & data rows and gives an error message for each row.
    But when I give a trading partner which is defined in the system, or when I remove the trading partner column entirely, it throws the above short dump.
    The breakdown categories defined in the system are Trading partner & Movement type.
    Both have breakdown type 1 (optional breakdown, initialized value allowed).
    Please note that, after several permutations & combinations to trace the error, I removed all the columns and kept the minimum required (Item / Company / Trading partner / PV LC).
    Appreciate your comments / inputs.
    Thanks!
    Kumar

    Hi Kumar,
    The note refers to the FINBASIS 300 release; in the present system we have FINBASIS 600. I raised an OSS message.
    - Anyway, implementation of the note has helped me on all releases. But first of all, I mentioned another remedy that you may see here:
    SPRO: SEM/Business Analytics -> Fin. Basis -> Master Data Framework -> System Settings -> Profile Parameter Setting.
    It's about setting the parameter for ABAP shared memory on the server. If you do not set this parameter to 200-300 MB, you will constantly get errors like the ones I mentioned while trying to save master data. Did you read my previous message carefully and look at this hint?
    In case of breakdown type 1, the upload should happen even if we don't specify any values for the breakdown category. Please confirm.
    - Confirmed. The system will accept any value, including null.
    Is it true that we cannot load RFD data (with both movement type & trading partner information) in one go? If yes, what is the reason behind it?
    - Not confirmed. Where did you get that from? I always do such loads.

  • Reading from .CSV and storing it into a collection

    Hi folks,
    Is there a way to make a dynamic procedure that works with .CSV documents and stores the contents in a collection? For example, you have to make a procedure to read from .CSV, but users upload 10 different versions with different numbers of columns.
    Normally I would define a record type to match the columns and store the rows in a collection. However, if I don't know the number of columns, I would need to define 10 record types in advance, which I am trying to avoid.
    The problem is I can't define SQL elements on the fly. On production I don't have the rights to dynamically create a table to match my columns and then drop it when I no longer need it, so I need to store the data in a collection.
    And the last option, where I would loop through the document and do the operations I need directly, is no good, since the document is also used by other procedures that write to and read from it. The idea is to pick up the data, store it in a collection, close the file, and then work with it.
    This is what I got so far:
    declare
      -- Variables
      l_file      utl_file.file_type;
      l_line      varchar2(10000);
      l_string    varchar2(32000);
      l_delimiter varchar2(10);
      -- Types
      type r_kolona is record(
        column_1 varchar2(500)
       ,column_2 varchar2(500)
       ,column_3 varchar2(500)
       ,column_4 varchar2(500)
       ,column_5 varchar2(500));
      type t_column_table is table of r_kolona;
      t_column    t_column_table := t_column_table();
    begin
      /*Define the delimiter*/
      l_delimiter := ';';
      /*Open file*/
      l_file      := utl_file.fopen( 'some dir', 'some.csv', 'R');
      /*Takes first row of document as header*/
      utl_file.get_line( l_file, l_line);
      loop
        begin
          utl_file.get_line( l_file, l_line);
          /*Delete newline operator*/
          l_string                         := rtrim( l_line, chr(13)) || l_delimiter;
          /*Extend array and insert parsed values */
          t_column.extend;
          t_column(t_column.last).column_1 := substr( l_string, 1, instr( l_string, l_delimiter, 1, 1) - 1);
          t_column(t_column.last).column_2 := substr( l_string, instr( l_string, l_delimiter, 1, 1) + 1, instr( l_string, l_delimiter, 1, 2) - instr( l_string, l_delimiter, 1, 1) - 1);
          t_column(t_column.last).column_3 := substr( l_string, instr( l_string, l_delimiter, 1, 2) + 1, instr( l_string, l_delimiter, 1, 3) - instr( l_string, l_delimiter, 1, 2) - 1);
          t_column(t_column.last).column_4 := substr( l_string, instr( l_string, l_delimiter, 1, 3) + 1, instr( l_string, l_delimiter, 1, 4) - instr( l_string, l_delimiter, 1, 3) - 1);
          t_column(t_column.last).column_5 := substr( l_string, instr( l_string, l_delimiter, 1, 4) + 1, instr( l_string, l_delimiter, 1, 5) - instr( l_string, l_delimiter, 1, 4) - 1);
        exception
          when no_data_found then
            exit;
        end;
      end loop;
      /*Close file*/
      utl_file.fclose(l_file);
      /*Loop through collection elements*/
      for i in t_column.first .. t_column.last
      loop
        dbms_output.put_line(
             t_column(i).column_1
          || ' '
          || t_column(i).column_2
          || ' '
          || t_column(i).column_3
          || ' '
          || t_column(i).column_4
          || ' '
          || t_column(i).column_5);
      end loop;
    exception
      when others then
        utl_file.fclose(l_file);
        raise; -- re-raise after closing so errors are not silently swallowed
    end;
    A stupid version would be to define a record with 50 elements and hope they don't nuke the Excel with more columns :)
    Best regards,
    Igor

    Igor S. wrote:
    "Use some to query data and then fix wrong entries on prod (insert, update, delete). Manipulate with some and then make new reports. The first that comes to mind, but basically, is to write a procedure that can be used for ANY .csv so I don't have to rewrite the code."
    This is logically wrong and smacks of poor design.
    You want to take CSV files with various unknown formats of data, read that data into some generic structure, and then somehow magically be able to process the unknown data to "fix wrong entries". If everything is unknown... how will you know what needs fixing?
    Good design of any system stipulates the structures that are acceptable. If that means you know there are just 20 possible CSV formats, and you can implement a mechanism to determine which format a particular CSV is in (perhaps something in the filename?), then you create 20 known targets (record structures/tables or whatever) to receive that data, using 20 external tables, or procedures, or whatever is necessary.
    Doing anything other than that is poor design: it leaves the code open to breaking, is non-scalable, hard to debug, and just wrong on so many levels. This isn't how software is engineered.
    Igor S. wrote:
    "For example, you have 20 developers that have to work with .CSV files. So when someone has to work with a .CSV, he would call a procedure with parameters directory and file name, and as an out parameter would get a collection with the .CSV stored inside."
    As others have mentioned, give the developers an APEX application for their data entry/manipulation, working directly on the database with known structures and validation so they can't create "wrong" data in the first place. They can then export that as .CSV data for other purposes if really required.
  • How to Move from CSV to Table in Physical layer in Best possible way

    We had a project which had its source (physical layer) as CSV. Now we are moving from CSV to a table in the database.
    Can anyone guide me through the best possible way to complete the task?
    With regards!
    Steve

    Create a new Database node in the Physical layer with a connection pool, create a schema folder, and drag your CSV object into the new database. All your BMM mappings, and hence Presentation objects, will move over seamlessly.
    It might be an idea to simply reverse engineer / import any object from your database to initially create your database connection / tree, then simply drag the CSV into it.
    Hope this helps,
    Alastair

  • External field catalog & Info object catalogs - Role - in Flexible upload

    1) What is the role of the InfoObject catalogs maintained in the data basis & the source data basis?
    Please be kind enough to mention a scenario underlining their utility in the consolidation process.
    Like:
    Are they used for loading master data from the source system to the BCS system?
    Can they be used in the flexible upload / data collection function?
    Are they used as a source of AFD data?
    I came across the below documentation on flexible upload in an SAP material:
    "When uploading from a field catalog, you also have the option of using mapping. In this case, the file structure no longer has to correspond with the structure of the data basis."
    Understood the above 2 points.
    "Rather, you can assign a BW InfoObjectCatalog that acts as the data structure description for the file here. You specify this InfoObjectCatalog in Customizing for the data basis."
    2) Does the above statement imply that we need not make any settings in the field catalog tab of the flexible upload? If yes, what do we do instead?
    In the flexible upload field catalog tab, we define the data structure of the file we upload (correct me if I'm wrong).
    If you use an external field catalog, you have to specify how you want to map the data structure of the file to the structure of the data basis.
    3) Please give an example of an external field catalog.
    4) Can we use an external field catalog in the upload of RFD?
    Many Thanks
    Kind Regards,
    Kumar

    There are two possibilities for using an InfoObject catalog in SEM-BCS (this is true for both kinds of catalog, characteristics and key figures):
    • In a data basis. The system adds the chars and KFs sitting in the catalogs defined in the data basis to the "Additional Fields" of each data stream (the tab strip "Data Stream Fields" in the data basis). If you check some of these fields and generate the data basis, these additional InfoObjects will be placed into the appropriate ODS/DSO objects and you will be able to use them for uploading some extra information.
    Without indicating the InfoObject catalog for chars, you will not be able to configure the new "assets/liabilities" functionality at all.
    • In a source data basis.
    After including the source data basis in your data basis, you will be able to use an external InfoObject catalog in a method of the category Flexible Upload. Tick the flag for using the external catalog and choose the SDB. In the mapping tab you will have the possibility to choose from the chars and KFs that are located in the catalogs.
    This might be used for uploading ANY data: RFD, AFD or master data.
    The scenarios, I guess, are rather obvious.
    Hope this helps.
