Route Pattern CSV File Removes "0" from Called Party Prefix Digits Field

I want to upload over 350 route patterns using the BAT tool in CUCM 9.1. Every pattern must have its Called Party Prefix Digits (Outgoing Calls) field set to 10 digits beginning with "0". The problem is that the CSV file loses the leading "0" from the digits. I tried setting the cell type in Excel to "Text", and that worked: the "0" was kept normally. But when I save and close the file and then open it again, the cell reverts to the "General" type and the "0" disappears again! Saving the file in any other format and uploading it to CUCM generates an error stating that the file format is not supported.
Attached is a sample entry from the CSV file. I want to preserve the whole number "0541234567" in the "PREFIX_DIGITS_CALLED_PARTY" field.
Can anyone tell me how to upload this large number of route patterns while preserving the leading "0"?

When editing the CSV file in Excel, make sure you set the cells whose values start with 0 to the "Text" format; otherwise Excel assumes the value is a number and strips the leading zero.
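
If Excel keeps reverting those cells to General on save, another option is to repair the file outside Excel entirely; a plain-text editor works, since the BAT CSV is just text. Below is a minimal, hypothetical Java sketch of the same idea. The file names and the column position of PREFIX_DIGITS_CALLED_PARTY are assumptions; adjust them to match your export:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class FixPrefixZeros {
    public static void main(String[] args) throws IOException {
        // Assumption: PREFIX_DIGITS_CALLED_PARTY is the 4th column (index 3);
        // adjust this to match the header row of your BAT file.
        final int PREFIX_COL = 3;
        List<String> fixed = new ArrayList<String>();
        for (String line : Files.readAllLines(Paths.get("routepatterns.csv"))) {
            String[] f = line.split(",", -1);   // -1 keeps empty trailing fields
            // A 10-digit prefix that lost its leading 0 now has only 9 digits.
            if (f.length > PREFIX_COL && f[PREFIX_COL].matches("\\d{9}")) {
                f[PREFIX_COL] = "0" + f[PREFIX_COL];
            }
            fixed.add(String.join(",", f));
        }
        Files.write(Paths.get("routepatterns-fixed.csv"), fixed);
    }
}

Upload the repaired file directly without reopening it in Excel; opening and saving it there will strip the zeros again.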
HTH, please rate all useful posts!
Chris

Similar Messages

  • Example for loading a CSV file into DIAdem from a LabVIEW application

    Hi everyone, I'm using LabVIEW 8.2 and DIAdem 10.1.
    I've been searching in NI example finder but I had no luck so far.
    I have already downloaded the labview connectivity VIs.
    Can anyone provide an example that can help me load a CSV file into DIAdem from a LabVIEW application?
    Thanks

    Hi Alexandre.
    I attach an example for you.
    Best Regards.
    Message edited by R_Duval on 01-15-2008 02:44 PM
    Romain D.
    National Instruments France
    Attachments:
    Classeur1.csv ‏1 KB
    Load CSV to Diadem.vi ‏15 KB

  • Getting a CSV file from the server to the client

    Hi,
    This is my first post to the forum. I hope someone can help me find a solution. I have an xyz.csv file on my server. I want to fetch that file and save it on my client (anywhere), keeping the same file type. Please share source code if you have it. I have been looking for a solution for the past two days.
    Please reply as soon as possible.
    To be more clear, I have a save button on my page. On clicking this button I should fetch the file from the server and save it in my desired location as a file itself, not as text content or anything.
    Please help me
    Thanks in regards,
    S.karthik

    You can open the text file in the servlet and then send the information to the client (applet) as strings. The applet must then convert the strings into some object or simply display the information in some way. Note that the text file you are opening must be stored in some relevant Tomcat directory, i.e. on the server. If you want to open a file on the client's computer, you get into signed applets.
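
    If an applet isn't a hard requirement, the servlet can instead stream the file to the browser as a download, which saves it on the client as a real file. A rough sketch under that assumption; the file path and class name are placeholders:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Streams a server-side CSV to the browser as a file download.
    public class CsvDownloadServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            File file = new File("/opt/tomcat/data/xyz.csv");  // placeholder path
            resp.setContentType("text/csv");
            resp.setHeader("Content-Disposition", "attachment; filename=\"xyz.csv\"");
            resp.setContentLength((int) file.length());
            InputStream in = new FileInputStream(file);
            OutputStream out = resp.getOutputStream();
            try {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);   // copy the file bytes to the response
                }
            } finally {
                in.close();
            }
        }
    }

    The browser's save dialog then handles writing the file wherever the user chooses, which avoids signed applets altogether.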

  • Converting a CSV file arriving as a string in one field of an XML message to XML

    Hi ,
    we have the below requirement, where the data type is:
    DT_MESSAGE
      field1  Data
       field2 DataString
    A CSV file of the below format will be sent in the DataString field of the XML.
    Source;EIX3;Date;080526;Charge;70199;Si;.42   ;Fe;.20   ;Cu;.00   ;Mn;.027  ;Mg;.49   ;Cr;.0007 ;Ni;0     ;Zn;.01   ;Ti;.01   ;Ag;0     ;B;0     ;Ba;0     ;Be;0     ;Bi;0     ;Ca;0     ;Cd;0     ;Co;0     ;Ga;0     ;Na;0     ;Nb;0     ;Pb;0     ;Sb;0     ;Sn;0     ;Sr;0     ;V;0     ;Zr;0     ;Al;0     ;P;0     ;Hg;0     ;Li;0     ;Alloy;606044;Alloy_Hy;AC10;Diametre;152;Client;IMPOL
    Source;EXF3;Date;080811;Charge;71460;Si;.04   ;Fe;.10   ;Cu;.002  ;Mn;.003  ;Mg;.002  ;Cr;.0006 ;Ni;.003  ;Zn;.014  ;Ti;.000  ;Ag;0     ;B;0     ;Ba;0     ;Be;0     ;Bi;0     ;Ca;0     ;Cd;0     ;Co;0     ;Ga;0     ;Na;0     ;Nb;0     ;Pb;.0010 ;Sb;0     ;Sn;.000  ;Sr;0     ;V;.001  ;Zr;0     ;Al;99.8  ;P;0     ;Hg;0     ;Li;0     ;Alloy;400101;Alloy_Hy;831B9901;Diametre;PFA;Client;DANFOSS
    Source;EXF3;Date;080813;Charge;71535;Si;9.55  ;Fe;.12   ;Cu;1.05  ;Mn;.01   ;Mg;.38   ;Cr;.0008 ;Ni;.00   ;Zn;.01   ;Ti;.01   ;Ag;0     ;B;0     ;Ba;0     ;Be;0     ;Bi;0     ;Ca;.0011 ;Cd;0     ;Co;0     ;Ga;0     ;Na;.0006 ;Nb;0     ;Pb;.0010 ;Sb;.0003 ;Sn;.00   ;Sr;.0002 ;V;0     ;Zr;0     ;Al;0     ;P;.0008 ;Hg;0     ;Li;.0001 ;Alloy;444205;Alloy_Hy;Al AS9C1;Diametre;PFA;Client;TEKSIDPL
    Source;EXF3;Date;080813;Charge;71538;Si;9.57  ;Fe;.12   ;Cu;1.01  ;Mn;.01   ;Mg;.39   ;Cr;.0008 ;Ni;.00   ;Zn;.01   ;Ti;.01   ;Ag;0     ;B;0     ;Ba;0     ;Be;0     ;Bi;0     ;Ca;.0007 ;Cd;0     ;Co;0     ;Ga;0     ;Na;.0004 ;Nb;0     ;Pb;.0010 ;Sb;.0004 ;Sn;.00   ;Sr;.0002 ;V;0     ;Zr;0     ;Al;0     ;P;.0007 ;Hg;0     ;Li;.0001 ;Alloy;444205;Alloy_Hy;Al AS9C1;Diametre;PFA;Client;TEKSIDPL
    Source;EIX3;Date;080813;Charge;71539;Si;.06   ;Fe;.19   ;Cu;.00   ;Mn;.00   ;Mg;.00   ;Cr;0     ;Ni;0     ;Zn;.01   ;Ti;.01   ;Ag;0     ;B;0     ;Ba;0     ;Be;0     ;Bi;0     ;Ca;0     ;Cd;0     ;Co;0     ;Ga;0     ;Na;0     ;Nb;0     ;Pb;0     ;Sb;0     ;Sn;0     ;Sr;0     ;V;0     ;Zr;0     ;Al;99.71 ;P;0     ;Hg;0     ;Li;0     ;Alloy;107070;Alloy_Hy;AC99.7;Diametre;152;Client;ARO TUBI
    where the first field of each row is Source.
    I need to parse the above CSV using XSLT or Java mapping.
    The target structure xml is
    DT_Target
       row (0..unbounded)
    field1 Source
    field2   Date
    field3   Charge
    field n   Client
    Can anyone help me parse this and map it to the target? What would be the better method to do it?
    Thanks and Regards,
    Rajesh

    You can use XSL mapping to retrieve that payload in a single field and map it to the target.
    Please check these links
    String to XML using XSLT..
    Re: String to XML using XSLT
    /people/michal.krawczyk2/blog/2005/11/01/xi-xml-node-into-a-string-with-graphical-mapping
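    If you end up choosing Java mapping over XSLT, the per-row parsing itself is simple, since each row is just alternating key;value tokens. A minimal standalone sketch of that step (the class and method names are illustrative, not from any SAP API):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class RowParser {
        // Parse one semicolon-delimited row of alternating key;value tokens.
        static Map<String, String> parseRow(String row) {
            String[] tok = row.split(";");
            Map<String, String> fields = new LinkedHashMap<String, String>();
            for (int i = 0; i + 1 < tok.length; i += 2) {
                fields.put(tok[i].trim(), tok[i + 1].trim());
            }
            return fields;
        }

        public static void main(String[] args) {
            // Abbreviated sample row from the thread.
            String row = "Source;EIX3;Date;080526;Charge;70199;Si;.42;Client;IMPOL";
            Map<String, String> f = parseRow(row);
            System.out.println(f.get("Source") + " " + f.get("Date") + " "
                    + f.get("Charge") + " " + f.get("Client"));
        }
    }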
    regards
    Ninad

  • Trigger while importing from .csv file

    Hey,
    I am importing data from a .csv file into a table called temp.
    The .csv file has a sales column, and the data in it also contains some empty fields, or 'N/A', or '-'.
    The sales column in my table is NUMBER,
    so some of the rows fail to import.
    How do I write a trigger to eliminate '-', 'N/A', or ' ' values from the sales column before inserting into the table?

    It might be easier if the field on temp were a varchar2 field instead.
    The values can then be loaded into this field, and you can use the trigger to test them and convert them into numbers or nulls before inserting into the proper data table.
    Check out the TO_NUMBER() function in SQL for info on how to convert strings to numbers.
    Regards
    Andy

  • How to upload .CSV file from Application Server

    Hi Experts,
    How do I upload a .CSV file, separated by ',', from the application server to an internal table?
    Invoice No,Cust No,Item Type,Invoice Date,days,Discount Amount,Gross Amount,Sales Amount,Customer Order No.,Group,Pay Terms
    546162,3233,1,9/4/2007,11,26.79,5358.75,5358.75,11264,HRS,11
    546163,2645,1,9/4/2007,11,3.07,305.25,305.25,10781,C,11
    Actually I read some already answered posts. But still I have some doubts.
    Can anybody please send me the code.
    Thanks in Advance.

    Hi Priya,
    Check this code
    The logic used here is as follows:
    Get all the data into an internal table in a simple format, i.e. each row has one field that contains an entire line.
    After getting the data, we split each line of the table on every occurrence of the delimiter (a comma in your case).
    Here, I have named the fields field01, field02 etc.; you could use your own names according to your requirement.
    parameters: p_file(512).
      DATA : BEGIN OF ITAB OCCURS 0,
              COL1(1024) TYPE C,
             END OF ITAB,
             WA_ITAB LIKE LINE OF ITAB.
      DATA: BEGIN OF ITAB_2 OCCURS 0,
        FIELD01(256),
        FIELD02(256),
        FIELD03(256),
        FIELD04(256),
        FIELD05(256),
        FIELD06(256),
        FIELD07(256),
        FIELD08(256),
        FIELD09(256),
        FIELD10(256),
        FIELD11(256),
        FIELD12(256),
        FIELD13(256),
        FIELD14(256),
        FIELD15(256),
        FIELD16(256),
       END OF ITAB_2.
      DATA: WA_2 LIKE LINE OF ITAB_2.
      DATA: LV_LEAVEPGM TYPE C.  " exit flag set when the file cannot be opened
        OPEN DATASET p_FILE FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
        IF SY-SUBRC = 8.
          WRITE:/ 'File' , p_FILE , 'cannot be opened'.
          LV_LEAVEPGM = 'X'.
          EXIT.
        ENDIF.
        DO.
          READ DATASET p_FILE INTO WA_ITAB.
          IF SY-SUBRC <> 0.  " end of file reached
            EXIT.
          ENDIF.
          APPEND WA_ITAB TO ITAB.
        ENDDO.
        CLOSE DATASET p_FILE.
      LOOP AT ITAB INTO WA_ITAB.
        SPLIT WA_ITAB-COL1 AT ','    " comma is the delimiter here
         INTO WA_2-FIELD01 WA_2-FIELD02 WA_2-FIELD03 WA_2-FIELD04
         WA_2-FIELD05 WA_2-FIELD06 WA_2-FIELD07 WA_2-FIELD08 WA_2-FIELD09
         WA_2-FIELD10 WA_2-FIELD11 WA_2-FIELD12 WA_2-FIELD13 WA_2-FIELD14
         WA_2-FIELD15 WA_2-FIELD16.
        APPEND WA_2 TO ITAB_2.
        CLEAR WA_2.
      ENDLOOP.
    Message was edited by:
            Kris Donald

  • Configure ODBC to access a CSV file on Linux for access from BI Server

    How do I configure an ODBC connection to a CSV file on Linux for access from the BI Server repository physical layer?
    I am migrating a working Windows OBIEE installation to Linux and cannot seem to connect to CSV files on Linux (from the BI Server, which also runs on the same Linux machine).
    I am using the SuSE Linux Enterprise Server 10 standard ODBC drivers.
    My odbc ini file entries are:
    [ODBC]
    Trace=0
    TraceFile=odbctrace.out
    TraceDll=/app/oracle/product/10.1.3/OracleBI/odbc/lib/odbctrac.so
    InstallDir=/app/oracle/product/10.1.3/OracleBI/odbc
    UseCursorLib=0
    IANAAppCodePage=4
    [ODBC Data Sources]
    AnalyticsWeb=Oracle BI Server
    Cluster=Oracle BI Server
    SSL_Sample=Oracle BI Server
    idcbicsvfiles=Odbc Text driver
    [idcbicsvfiles]
    Description = Odbc Text driver
    Driver = Odbc Text driver
    Directory = /data/oracle/OracleBIData/idc
    #ReadOnly = No
    #CaseSensitive = No
    #Catalog = No
    ColumnSeperator = ,
    #Trace = 1
    #Tracefile =/data/oracle/OracleBIData/odbctrace.txt
    #Username      = oracle
    #Password      = ''
    [AnalyticsWeb]
    Driver=/app/oracle/product/10.1.3/OracleBI/server/Bin/libnqsodbc.so
    Description=Oracle BI Server
    ServerMachine=local
    Repository=
    Catalog=
    UID=
    PWD=
    Port=9703
    The csv files I want to use are in the directory /data/oracle/OracleBIData/idc to which I have set up a working and checked connection ([idcbicsvfiles]) on the linux machine itself.
    The error message I get when I select View Data in the physical layer is:
    [NQODBC][SQL_STATE: HY000][nQSError: 10058] A general error has occurred.
    [nQSError: 43093] An error occurred while processing the EXECUTE PHYSICAL statement.
    [nQSError: 16023] The ODBC function has returned an error. The database may not be available, or the network may be down.
    Can anybody give me a clue on how to get this working, e.g. a working odbc.ini file from your own installation (and/or a tip for the ODBC driver choice/configuration)?
    P.S. I know this is not supported by Oracle, but I cannot imagine that nobody is using this.

    Hi,
    Check this... Re: Is there ODBC driver for excel flat file in Unix Box
    Re: OBIEE to use a CSV file as a data source on Linux
    Regards,
    Srikanth
    Edited by: Srikanth Mandadi on Oct 8, 2010 2:50 AM

  • Remove and install dial plan installer - the effects on route patterns etc

    Hi All,
    I am hoping someone could answer this one for me quickly.
    I have a scenario during an refresh upgrade to CUCM 10.5(2) where it fails in the install logs with "component_install:807, failed to refresh_upgrade Infrastructure_post components | <LVL::Critical>".
    The fix to this may be that I need to uninstall the AUNP dialplan before upgrading.
    See (https://supportforums.cisco.com/printpdf/12334191)
    What are the effects on route patterns etc. when removing the AUNP dial plan?
    Will the configuration be maintained and become operable once I upgrade and re-install the AUNP?
    Thanks in advance
    Kent

    Check your database replication status, this might help:
    https://supportforums.cisco.com/docs/DOC-13672
    HTH,
    Chris

  • Processing Several Records in a CSV File

    Hello Experts!
    I'm currently using XI to process an incoming CSV file containing Accounts Payable information. The data from the CSV file is used to call a BAPI in ECC (BAPI_ACC_DOCUMENT_POST), and return messages are written to a text file. I'm also using BPM. So far, I've been able to get everything to work: financial documents are successfully created in ECC, and I also receive the success message in my return text file.
    I am, however, having one small problem... No matter how many records are in the CSV file, XI only processes the very first record. So my question is this: why isn't XI processing all the records? Do I need a loop in my BPM? Are there occurrence settings that I'm missing? I had figured XI would simply process each record in the file.
    Also, are there some good examples out there that show me how this is done?
    Thanks a lot, I more than appreciate any help!

    Matthew,
    First let me explain the BPM Steps,
    Recv--->Transformation1->For-Each Block->Transformation2->Synch Call->Container(To append the response from BAPI)->Transformation3--->Send
    Transformation3 and Send must be outside Block.
    Transformation1
    Here, the source and target must be the same. I think you already know how to split the messages; if not, see the example below.
    Source
    <MT_Input>
    <Records>
    <Field1>Value1</Field1>
    <Field2>Value1</Field2>
    </Records>
    <Records>
    <Field1>Value2</Field1>
    <Field2>Value2</Field2>
    </Records>
    <Records>
    <Field1>Value3</Field1>
    <Field3>Value3</Field3>
    </Records>
    </MT_Input>
    Now, I need to split the messages for each Records node, so what do I do?
    In Message Mapping, choose the source and target as the same and, in the Messages tab, set the target occurrence to 0..unbounded.
    Now, if you go to the Mapping tab, you can see a Messages tag added to your structure, and the <MT_Input> occurrence changed to 0..unbounded.
    Here is the logic now:
    Map Records to MT_Input.
    Map Constant (empty) to Records.
    Map the rest of the fields directly. Now your output looks like:
    <Messages>
    <Message1>
    <MT_Input>
    <Records>
    <Field1>Value1</Field1>
    <Field2>Value1</Field2>
    </Records>
    </MT_Input>
    <MT_Input>
    <Records>
    <Field1>Value2</Field1>
    <Field2>Value2</Field2>
    </Records>
    </MT_Input>
    <MT_Input>
    <Records>
    <Field1>Value3</Field1>
    <Field3>Value3</Field3>
    </Records>
    </MT_Input>
    </Message1>
    </Messages>
    raj.

  • Block route pattern

    Hello,
    I ran the Dialed Number Analyzer and it says a route pattern is blocking the extension I'm trying to dial on my VoIP network. I tried tracing which partition and calling search space might be causing the block, but I can't tell. Where in CUCM version 8 can I find the name of the route pattern that is blocking me from dialing that extension?
    Thanks,

    You need to collect the detailed CallManager traces and look at the Digit Analysis (DA) portion. The list of partitions in the 'pss' field should contain the partition of the route pattern, and the 'RouteBlockFlag' in the DA results will say 'Route this pattern' or 'Block this pattern'. Here is an example:
    |StationD - stationOutputActivateCallPlane tcpHandle=0x53563d0
    |Digit analysis: match(fqcn="", cn="1000", pss="", dd="")
    |Digit analysis: analysis results
    |PretransformCallingPartyNumber=1000
    |CallingPartyNumber=1000
    |DialingPartition=
    |DialingPattern=
    |DialingRoutePatternRegularExpression=
    |DialingWhere=
    |PatternType=Unknown
    |PotentialMatches=PotentialMatchesExist
    |DialingSdlProcessId=(0,0,0)
    |PretransformDigitString=
    |PretransformTagsList=
    |PretransformPositionalMatchList=
    |CollectedDigits=
    |TagsList=
    |PositionalMatchList=
    |RouteBlockFlag=BlockThisPattern
    HTH
    manish

  • Importing CSV file into AddressBook: Major and Immediate Crash

    I'll start with what I've already tried to solve this problem. I have read through several posts regarding other Address Book problems. I wanted to try the recommended deletion of the file "homedirectory/library/application support/address book/AddressBook-v22.abcddb". That file does not seem to exist on my computer for me to even try deleting it. I was able to locate the "library" folder and get seamlessly to the "application support" folder, but there is no "address book" folder in the "application support" folder.
    The problem is with importing a CSV file (created/saved from Outlook Express) into my Mac AddressBook. I've made 10 attempts and have gotten 10 crash messages. The first part of the crash message is posted below.
    Is there anything I can do to solve this problem? Is it because I've created a CSV file from Outlook Express that it won't work? I have successfully imported other CSV files (like an old, ancient CSV file that I created from a nextel phone years ago and a family address file that I recently created on my MacBook in Excel.)
    Here's the first page or so of the crash message... the entire report is about 28 pages:
    Process: Address Book [254]
    Path: /Applications/Address Book.app/Contents/MacOS/Address Book
    Identifier: com.apple.AddressBook
    Version: 5.0.1 (864)
    Build Info: AddressBook-8640000~4
    Code Type: X86-64 (Native)
    Parent Process: launchd [144]
    Date/Time: 2009-12-16 15:47:00.744 -0500
    OS Version: Mac OS X 10.6.2 (10C540)
    Report Version: 6
    Interval Since Last Report: 994434 sec
    Crashes Since Last Report: 10
    Per-App Interval Since Last Report: 8923 sec
    Per-App Crashes Since Last Report: 10
    Anonymous UUID: 9EDF5817-9DD1-4C21-9CE5-F6882E750B54
    Exception Type: EXC_CRASH (SIGABRT)
    Exception Codes: 0x0000000000000000, 0x0000000000000000
    Crashed Thread: 11 Dispatch queue: com.apple.root.default-priority
    Application Specific Information:
    abort() called
    *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (7) beyond bounds (7)'
    *** Call stack at first throw:
    0 CoreFoundation 0x00007fff82bf7444 __exceptionPreprocess + 180
    1 libobjc.A.dylib 0x00007fff82d3b0f3 objcexceptionthrow + 45
    2 CoreFoundation 0x00007fff82bf7267 +[NSException raise:format:arguments:] + 103
    3 CoreFoundation 0x00007fff82bf71f4 +[NSException raise:format:] + 148
    4 Foundation 0x00007fff83fd5080 _NSArrayRaiseBoundException + 122
    5 Foundation 0x00007fff83f37b81 -[NSCFArray objectAtIndex:] + 75
    6 AddressBook 0x00007fff833af1df -[ABImportMappingModel(PrivateMappingToPersonConversion) addressDictionaryForMapping:rowData:localizedAddressMappings:] + 348
    7 AddressBook 0x00007fff833af4d3 -[ABImportMappingModel(MappingToPersonConversion) personWithRowData:localizedAddressMappings:addressBook:] + 597
    8 AddressBook 0x00007fff833ace87 __-[ABNewTextFileImportController import:]block_invoke1 + 153
    9 Foundation 0x00007fff83f7a7d9 -[NSBlockOperation main] + 140
    10 Foundation 0x00007fff83f6b06d -[__NSOperationInternal start] + 681
    11 Foundation 0x00007fff83f6ad23 ___startOperations_block_invoke2 + 99
    12 libSystem.B.dylib 0x00007fff8617ace8 dispatch_call_block_andrelease + 15
    13 libSystem.B.dylib 0x00007fff86159279 dispatch_workerthread2 + 231
    14 libSystem.B.dylib 0x00007fff86158bb8 pthreadwqthread + 353
    15 libSystem.B.dylib 0x00007fff86158a55 start_wqthread + 13

    Pamela MacBeginner wrote:
    I'll start with what I've already tried to solve this problem. I have read through several posts regarding other Address Book problems. I wanted to try the recommended deletion of the file "homedirectory/library/application support/address book/AddressBook-v22.abcddb". That file does not seem to exist on my computer for me to even try deleting it.
    It does exist on your computer; you are looking in the wrong library folder. You need the library folder in your home directory, not the one at the top level of the drive. Click the house icon in the sidebar of any Finder window; that's your home directory. You can also get to it from the top level of the drive by going to /Users/yourusername.
    If deleting this file does not help, try converting the CSV file to a vCard before importing it into Address Book. You can use this converter, for example:
    http://homepage.mac.com/phrogz/CSV2vCard_v2.html

  • How can I read millions of records and write them as a *.csv file?

    I have to return some set of column values (based on the current date) from the database (it could be millions of records). DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving through a procedure using a cursor.)
    I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve it (BLOB or CLOB?)?
    Please help me in solving this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; for now it returns the values to the screen (using DBMS_OUTPUT, temporarily). I have to redirect the output to the .csv,
    and the .csv should be in some physical directory, and I have to upload (FTP) the file from the directory to the website.
    Please help me out.

    Jimmy,
    Make sure that UTL_FILE is properly installed. Make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been restarted so that it takes effect. Make sure that you have sufficient privileges granted directly, not through roles, including privileges on the file and directory that you are trying to write to. Then add the exception block below to your procedure to narrow down the source of the exception, and test again. If you still get an error, please post a cut and paste of the exact code that you ran and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
                'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
                'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20003,
                'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20004,
                'INVALID_OPERATION: The file could not be opened or operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20005,
                'READ_ERROR: An operating system error occurred during the read operation.');
        when utl_file.write_error then
            raise_application_error(-20006,
                'WRITE_ERROR: An operating system error occurred during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20007,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');
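
    If the extract does not have to be produced inside the database, another option is a small client-side JDBC program, which sidesteps the DBMS_OUTPUT and UTL_FILE limits entirely. A minimal sketch; the connection URL, credentials, query, and file name are placeholders, and no CSV quoting/escaping of values is handled:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Streams an arbitrarily large query result straight to a CSV file.
    public class ExportToCsv {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "pass");
            Statement st = conn.createStatement();
            st.setFetchSize(1000);  // fetch rows in batches, never all at once
            ResultSet rs = st.executeQuery("SELECT col1, col2, col3 FROM my_table");
            PrintWriter out = new PrintWriter(
                    new BufferedWriter(new FileWriter("extract.csv")));
            try {
                while (rs.next()) {
                    out.println(rs.getString(1) + "," + rs.getString(2)
                            + "," + rs.getString(3));
                }
            } finally {
                out.close();
                rs.close();
                st.close();
                conn.close();
            }
        }
    }

    The written file can then be uploaded (FTP) to the website like any other file, with no row-count limit beyond disk space.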

  • Why does the ResultSet getDate() method return null when querying a .csv file?

    Here is the full code:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.util.Calendar;

    class jdbc2 {
        final private String s1 = "SELECT top 10 [DATE], [ADJ CLOSE] FROM [vwo-1.csv]";
        private ResultSet result = null;
        private Connection conn = null;

        public static void main(String[] args) throws SQLException {
            jdbc2 db = new jdbc2();
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                db.conn = DriverManager.getConnection("jdbc:odbc:STOCK_DATA");
                PreparedStatement sql = db.conn.prepareStatement(db.s1);
                db.result = sql.executeQuery();
                // Check column names and types using the ResultSetMetaData object.
                ResultSetMetaData metaData = db.result.getMetaData();
                System.out.println("Table Name : " + metaData.getTableName(2));
                System.out.println("Field\t\tDataType");
                for (int i = 0; i < metaData.getColumnCount(); i++) {
                    System.out.print(metaData.getColumnName(i + 1) + "\t");
                    System.out.println(metaData.getColumnTypeName(i + 1));
                }
                System.out.print(metaData.getColumnName(1) + "\t"
                        + metaData.getColumnName(2) + "\n");
                while (db.result.next()) {
                    System.out.print(db.result.getDate("DATE", Calendar.getInstance()));
                    System.out.format("\t%,.2f\n", db.result.getFloat("Adj Close"));
                }
            } catch (Exception e) {
                System.out.println("Error: " + e.getMessage());
            } finally {
                if (db.result != null) db.result.close();
                if (db.conn != null) db.conn.close();
            }
        }
    }
    Everything works well until it gets to this block:
    while (db.result.next()) {
        System.out.print(db.result.getDate("DATE", Calendar.getInstance()));
        System.out.format("\t%,.2f\n", db.result.getFloat("Adj Close"));
    }
    getDate("DATE", Calendar.getInstance()) always returns null, instead of the date value in vwo-1.csv.
    Even if I change it to java.sql.Date d = db.result.getDate("DATE") and convert it to a String using .toString(), I still get nulls. The dollar amount in the "Adj Close" field is fine, no problem.
    The .csv file was downloaded from Yahoo Finance.
    Can anyone review the code and shed some light on what I did wrong?
    Thanks a lot.
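
    For what it's worth, the JDBC-ODBC text driver often fails to infer a DATE type from a CSV column, in which case getDate() returns null even though the value is present. A common workaround is to read the column as a string and parse it yourself. A sketch, assuming the dates look like yyyy-MM-dd (check the actual format in vwo-1.csv):

    import java.sql.ResultSet;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateColumnWorkaround {
        // Reads the DATE column as text and parses it in Java instead of
        // relying on the ODBC text driver's type inference.
        static void printRows(ResultSet rs) throws Exception {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
            while (rs.next()) {
                String raw = rs.getString("DATE");
                Date d = (raw == null) ? null : fmt.parse(raw);
                System.out.format("%s\t%,.2f%n", d, rs.getFloat("Adj Close"));
            }
        }
    }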

    CREATE TABLE `login` (
    `username` varchar(40) DEFAULT NULL,
    `password` varchar(40) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `amount` (
    `amountid` int(11) NOT NULL,
    `receiptid` int(11) DEFAULT NULL,
    `loanid` int(11) DEFAULT NULL,
    `amount` bigint(11) DEFAULT NULL,
    `latefee` int(11) DEFAULT NULL,
    `paymentid` int(11) DEFAULT NULL,
    `pid` int(11) DEFAULT NULL,
    PRIMARY KEY (`amountid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `applicationfee` (
    `applicationfeeid` int(11) DEFAULT NULL,
    `applicationamount` int(11) DEFAULT NULL,
    `applicationfee` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `category` (
    `categoryid` int(11) DEFAULT NULL,
    `categoryname` varchar(40) DEFAULT NULL,
    `categorydescription` varchar(500) DEFAULT NULL,
    `cattype` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `commission` (
    `commissionid` int(11) DEFAULT NULL,
    `bussiness` int(11) DEFAULT NULL,
    `commission` int(11) DEFAULT NULL,
    `pid` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `customer` (
    `cacno` int(11) NOT NULL DEFAULT '0',
    `name` varchar(40) DEFAULT NULL,
    `age` int(11) DEFAULT NULL,
    `cphone` varchar(40) DEFAULT NULL,
    `cmobile` varchar(40) DEFAULT NULL,
    `caddress` varchar(500) DEFAULT NULL,
    `cstatus` varchar(20) DEFAULT NULL,
    `cphoto` longblob,
    `pid` int(11) DEFAULT NULL,
    PRIMARY KEY (`cacno`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `daybook` (
    `closingbal` varchar(40) DEFAULT NULL,
    `date` date DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `extraincome` (
    `categoryid` int(11) NOT NULL,
    `receiptid` int(11) DEFAULT NULL,
    `date` date DEFAULT NULL,
    `amountid` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `employee` (
    `empno` int(11) DEFAULT NULL,
    `empname` varchar(40) DEFAULT NULL,
    `age` int(11) DEFAULT NULL,
    `sal` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `image` (
    `id` int(11) DEFAULT NULL,
    `image` blob
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `loan` (
    `loanid` int(11) NOT NULL DEFAULT '0',
    `loanamt` varchar(40) DEFAULT NULL,
    `payableamount` double DEFAULT NULL,
    `installment` int(11) DEFAULT NULL,
    `payableinstallments` int(11) DEFAULT NULL,
    `monthlyinstallment` varchar(20) DEFAULT NULL,
    `surityname` varchar(20) DEFAULT NULL,
    `applicationfeeid` int(11) DEFAULT NULL,
    `interestrate` float DEFAULT NULL,
    `issuedate` date DEFAULT NULL,
    `duedate` date DEFAULT NULL,
    `nextduedate` date DEFAULT NULL,
    `cacno` int(11) DEFAULT NULL,
    `cname` varchar(20) DEFAULT NULL,
    `pid` int(11) DEFAULT NULL,
    `interestamt` double DEFAULT NULL,
    `pendingamt` float DEFAULT NULL,
    PRIMARY KEY (`loanid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `md` (
    `mdid` int(11) NOT NULL DEFAULT '0',
    `mdname` varchar(40) DEFAULT NULL,
    `mdphoto` varchar(100) DEFAULT NULL,
    `mdphone` varchar(40) DEFAULT NULL,
    `mdmobile` varchar(40) DEFAULT NULL,
    `mdaddress` varchar(500) DEFAULT NULL,
    PRIMARY KEY (`mdid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `partner` (
    `pid` int(11) NOT NULL DEFAULT '0',
    `pname` varchar(40) DEFAULT NULL,
    `paddress` varchar(500) DEFAULT NULL,
    `pphoto` varchar(100) DEFAULT NULL,
    `pphone` varchar(40) DEFAULT NULL,
    `pmobile` varchar(40) DEFAULT NULL,
    `pstatus` varchar(20) DEFAULT NULL,
    `mdid` int(11) DEFAULT NULL,
    `mdname` varchar(40) DEFAULT NULL,
    `date` date DEFAULT NULL,
    `nextpaydate` date DEFAULT NULL,
    PRIMARY KEY (`pid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `partnerinvested` (
    `pid` int(11) DEFAULT NULL,
    `pname` varchar(20) DEFAULT NULL,
    `receiptid` int(11) DEFAULT NULL,
    `date` date DEFAULT NULL,
    `amountinvested` int(11) DEFAULT NULL,
    `latefee` int(11) DEFAULT NULL,
    `amountid` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `payments` (
    `paymentid` int(11) NOT NULL,
    `categoryid` int(11) DEFAULT NULL,
    `particulars` varchar(100) DEFAULT NULL,
    `amountid` int(11) DEFAULT NULL,
    `paymentdate` date DEFAULT NULL,
    PRIMARY KEY (`paymentid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    CREATE TABLE `receipts` (
    `receiptid` int(11) DEFAULT NULL,
    `paiddate` date DEFAULT NULL,
    `amountid` int(11) DEFAULT NULL,
    `loanid` int(11) DEFAULT NULL,
    `latefee` int(11) DEFAULT NULL,
    `installment` int(11) DEFAULT NULL,
    `cacno` int(11) DEFAULT NULL,
    `cname` varchar(40) DEFAULT NULL,
    `pid` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

  • Loading csv file trying to replace double quotes

    Hi, I am trying to load a CSV file that has a field containing double quotes, and I want to replace the doubled quotes with single ones. In the CSV file I enter
    "TEST,2345" in a field, and when it loads it appears as ""TEST,2345"".
    So I got this code to handle it.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Matches either a quoted field (group 1) or an unquoted field (group 2);
    // assumes standard CSV escaping ("" inside a quoted field).
    Pattern fieldRegex = Pattern.compile("\"((?:[^\"]|\"\")*)\"|([^,]+)");
    Pattern quotesRegex = Pattern.compile("\"\"");
    List<String> fields = new ArrayList<String>();
    Matcher m = fieldRegex.matcher(line);
    while (m.find()) {
        String field;
        if (m.group(1) != null) {
            // A quoted string was matched: replace escaped double quote "" with "
            field = quotesRegex.matcher(m.group(1)).replaceAll("\"");
        } else {
            field = m.group(2);
        }
        fields.add(field.trim());
    }
    I want to change a line like this
    D,,""TEST,2345"",46,,
    to
    D,,"TEST,2345",46,,
    but afterwards nothing has changed; the record looks as below:
    D,,""TEST,2345"",46,,
    What's wrong with this?
    thanks

    I simply created a simple CSV file with one field, into which I entered a double-quoted "TEST,2345".
    Later, when reading in the file (using the code I pasted in), I can inspect it and it's """TEST,2345""".
    The replaceAll doesn't seem to change anything (nor did my code), so I was stumped. Thanks.
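
    One thing worth checking: replaceAll returns a new String, and nothing in the loop writes back to line, so inspecting line afterwards will always show the original text; the cleaned values exist only in the fields list. A small sketch of rebuilding the record from them (the re-quoting rule is an assumption):

    import java.util.List;

    // Rebuild the cleaned record from the collected fields. Fields that still
    // contain a comma are re-quoted so the CSV stays parseable.
    static String rebuild(List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) sb.append(',');
            String f = fields.get(i);
            sb.append(f.contains(",") ? "\"" + f + "\"" : f);
        }
        return sb.toString();
    }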

  • Correlation Between Several Different Arrays & Saving Results Into a .CSV File

    Hello Everyone!
    I have 3 arrays from which I initialize a .CSV file when I run my VI (as you can see in the example file). (I use data from other arrays as well, but it is not important.)
    The First array "customer Number" can be from 0 - 15
    The second array "Customer Present" is a Boolean.
    The third array will contain some fixed values like "milk, meat, clothing....."; it will have 14 different categories.
    The fifth array will contain the quantity of the products from each category in the previous array.
    My job is to add to my initialized .CSV file the values from the 4th and 5th arrays, but only for the customers that are present.
    The result should look something like the second part of the doc I attached (which, sadly, I created by hand).
    If no customer has bought something from a category, it should not be mentioned.
    Also, it would be nice if I separated the file for each customer (create 16 different files).
    I hope someone has some idea, because I don't.
    I figured out that I should feed the bool array into a case structure, and if the customer is present then........
    Thank you in advance!
    Attachments:
    Initilized&Data.doc ‏42 KB
    Initialize.GIF ‏21 KB

    I don't understand where correlation fits into this...
    It seems to me that you're trying to write out a file from the values on front panel controls, rather than the other way around, as Bernd was suggesting. I see no actual read functions in your screenshot. What does your front panel look like? Can you post the VI and give a brief description of how it's supposed to be used? You seem to have some front panel arrays that you're indexing out; for this you should be using auto-indexing rather than a fixed constant. Also, the local variables are completely unnecessary; what I see is text-based programming.
