Date Problem in CSV 2.0 Format in CCM

Hi Gurus,
I am new to CCM. We are using the CSV 2.0 format to upload the catalog files. The supplier part number 10-01 becomes 1-Oct when I save my file. There are about 10 supplier part numbers in the file, and all of them are changing to date format. Right now we are correcting them manually in the master catalog. How can we overcome this problem? Any suggestions? Points will be awarded for sure.
                                                                                Pradeep

Hello,
Opening the file with Notepad is a good way to check whether the CSV itself is correct; the change to 1-Oct happens when Excel opens or saves the file, not in the CSV itself. Also check the attribute definition of the supplier part number in CCM. That is all I can tell with my small experience with CCM; probably others can help you better with this!
Rgds,
Pierre
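
A general workaround, independent of CCM: keep Excel from ever treating the column as a date, either by importing the CSV through Excel's Text Import Wizard with the part-number column typed as Text, or by writing each part number as the quoted formula ="10-01", which Excel displays as literal text. Below is a minimal Java sketch of the second approach; the file name, delimiter, and part numbers are only illustrative, and you would need to confirm that CCM accepts the ="..." wrapper before adopting it.

    import java.io.IOException;
    import java.io.PrintWriter;

    public class ExcelSafeCsv {
        public static void main(String[] args) throws IOException {
            String[] partNumbers = {"10-01", "10-02", "10-11"};
            try (PrintWriter out = new PrintWriter("catalog.csv")) {
                out.println("Supplier Part No,Description");
                for (String pn : partNumbers) {
                    // ="10-01" makes Excel keep the text instead of showing 1-Oct
                    out.println("=\"" + pn + "\",Sample item");
                }
            }
        }
    }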

Similar Messages

  • Export Of Data In The .CSV Format With Column Headings On Top

    I'm trying to export data in the .csv format from a report (Crystal Reports XI Release 2).  When I export the data, the headings appear in the first few columns on the left-hand side (i.e. columns "A" through "G") of every row, rather than as column headings on top.  These headers describe the data beneath them.  I tried various combinations of export options.  The file needs the extension .csv (or .xls) for the tool that uses the data.  Any suggestions on how I can accomplish this?

    Abhishek,
    I tried to apply your solution, but forgot that the Crystal Reports support desk updated my version of Crystal Reports XI to Release 2 for a previous problem.  My CDs are Crystal Reports XI Release 1.  Now I can't export at all until I figure out how to turn the export option on.  I do have access to an older version (version unknown) that came with one of our business systems.  The options it has for "Separated Values (CSV)" are "Character", "Tab", and "Delimiter".  It does not have the option that you mentioned.  Is there a similar option in this older version?  Lastly, how can I turn on "Export" for Crystal Reports XI Release 2?  Thanks!

  • Issues with data exported in CSV format

    Hi,
    I'm wondering if anyone else is having problems with exporting system data as a CSV. I can successfully run an export; however, many of the CSV files contain superfluous line breaks, so the comma delimiting does not work properly when viewing the records in Excel.
    Does this sound familiar to anyone else?
    Cheers,
    Cameron

    Are you talking about the case where you export some records and, when you open them in Microsoft Excel, everything looks as if it were a text file? If so, then yes. In fact, I was exporting yesterday and hit this issue; I do not remember having it before the upgrade. What I did was open a blank Microsoft Excel sheet, then File > Open and chose the CSV file, then adjusted the settings in the Text Import Wizard to get it into the right format.
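
    One possible cause: if the stray line breaks sit inside quoted fields, the export may actually be valid CSV that simply cannot be split blindly on newlines; a reader has to track quote state. A rough Java sketch of that idea (it deliberately glosses over doubled-quote escapes, which happen to leave the quote state unchanged anyway):

        import java.util.ArrayList;
        import java.util.List;

        public class CsvRecordSplitter {
            // Split CSV text into logical records, ignoring newlines
            // that occur inside double-quoted fields.
            static List<String> records(String csv) {
                List<String> out = new ArrayList<>();
                StringBuilder current = new StringBuilder();
                boolean inQuotes = false;
                for (char c : csv.toCharArray()) {
                    if (c == '"') inQuotes = !inQuotes;
                    if (c == '\n' && !inQuotes) {
                        out.add(current.toString());
                        current.setLength(0);
                    } else if (c != '\r' || inQuotes) {
                        current.append(c);
                    }
                }
                if (current.length() > 0) out.add(current.toString());
                return out;
            }

            public static void main(String[] args) {
                String csv = "id,comment\n1,\"line one\nline two\"\n2,plain\n";
                records(csv).forEach(System.out::println);
            }
        }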

  • Problem in CSV Data Upload

    Hello Experts,
    I need to upload the following data from a CSV file.
    2.087
    2.433
    2.567
    In my DataSource Management screen, on the Preview tab, I see the following (and similar results in the DSO and InfoCube):
    2,0870000000000000E+03
    2,4330000000000000E+03
    2,5670000000000000E+03
    In my BEx report I see the following results...
    2.087 ERR
    2.433 ERR
    2.567 ERR
    How do I solve this issue? I have defined this field as Floating Point, with length 16, decimal positions 16, and external length 24.
    Thanks & POINTS WILL BE AWARDED if my problem is solved,
    SD

    Hi SD,
    As you have given the key figure a length of 16, it will show the same in the ODS and Cube.
    But on the front end you can change that: go to the key figure properties --> decimals should be 2 and scaling factor 1.
    If your problem is the ERR, that is because you have not maintained a currency/unit. Create a calculated key figure --> use NODIM(original key figure), and hide the original key figure.
    Cheers,
    Shana
    Assigning pts is the way of saying thanks in SDN
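
    As a side note, preview values like 2,0870000000000000E+03 suggest a decimal-separator mismatch during the load: if the settings treat the period as a thousands separator, 2.087 is read as the integer 2087. A small Java illustration of the effect (the locales are only examples of the two separator conventions):

        import java.text.DecimalFormat;
        import java.text.DecimalFormatSymbols;
        import java.text.ParseException;
        import java.util.Locale;

        public class DecimalSeparatorDemo {
            public static void main(String[] args) throws ParseException {
                // Comma as decimal separator, period as grouping separator
                DecimalFormat german = new DecimalFormat("#,##0.###",
                        new DecimalFormatSymbols(Locale.GERMANY));
                System.out.println(german.parse("2.087")); // 2087 - period read as grouping

                // Period as decimal separator
                DecimalFormat us = new DecimalFormat("#,##0.###",
                        new DecimalFormatSymbols(Locale.US));
                System.out.println(us.parse("2.087")); // 2.087
            }
        }

    If that is the cause, fixing the decimal notation setting used by the load addresses the stored values, while Shana's suggestions address the display.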

  • Load csv date problem

    I am using the Data Load option to load a CSV file into an Oracle table.
    I am getting an ORA-01843 "not a valid month" message.
    The field in the Oracle table is defined as a Timestamp.
    The date data in the CSV file is in the format 2006/06/24 13:23:12.
    What format do I put in the format box when doing the upload?
    Terry

    Probably you are right, it should work this way, but I tried several times to upload the data the OP provided into a table with a TIMESTAMP column, and gave up. I should admit I was not aware of how the month names were expected to be written, nor did I account for the simple fact that the hour can only be between 1 and 12 with some format masks.
    What is obscure is that uploading into a table with a DATE column works perfectly (and simply changing the format works as well). I should probably enable tracing and try to investigate what is going on here, but currently I have no time for these experiments...
    Best regards
    Maxim
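
    For reference, the Oracle format mask that matches 2006/06/24 13:23:12 is YYYY/MM/DD HH24:MI:SS (HH24 avoids the 1-12 restriction of plain HH that Maxim mentions). The same field-by-field mapping expressed in Java's java.time, purely to illustrate the pattern:

        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;

        public class TimestampPattern {
            public static void main(String[] args) {
                // yyyy/MM/dd HH:mm:ss mirrors Oracle's YYYY/MM/DD HH24:MI:SS
                DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss");
                LocalDateTime ts = LocalDateTime.parse("2006/06/24 13:23:12", fmt);
                System.out.println(ts); // 2006-06-24T13:23:12
            }
        }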

  • Whenever I try to export my form data into a csv it is not formatted correctly

    Whenever I try to export my data into a CSV and open it in Notepad, it does not place each row on a separate line. Is there any way to insert a return at the end of the form or data so that each row is on a separate line?

    FormsCentral produces CSV files that use CR (Carriage Return) as the row delimiter.  Notepad requires CRLF (Carriage Return + Line Feed).  If you use a different text editor (WordPad, Notepad++) you will find each record on its own line (assuming you do not have word wrap enabled).
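
    If the file has to stay readable in Notepad, converting the bare CRs to CRLF is a small post-processing step; a Java sketch (the file names are placeholders):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class CrToCrLf {
            public static void main(String[] args) throws IOException {
                String text = Files.readString(Path.of("export.csv"));
                // Replace every CR that is not already followed by LF with CRLF
                String fixed = text.replaceAll("\r(?!\n)", "\r\n");
                Files.writeString(Path.of("export-crlf.csv"), fixed);
            }
        }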

  • Internal table data 1E2 automatically convert to scientific format

    Dear all,
    I have been searching the forums for a solution for months and have tried all possible methods, but still found no way to solve the problem above. I found a way to solve it partially for us, which may be very helpful for others who meet a similar case, so I am posting it here.
    My problem is that when I export my internal table data to Excel, a cell value of 1E2 automatically becomes 1.00E02, and 1E8 becomes 1.00E08; we need them to stay 1E2 and 1E8 in Excel.
    You can recreate my problem as follows:
    1. Enter 1E2 into Microsoft Excel, then press Enter; it automatically changes into scientific format, which we do not want.
    2. In any SAP system, open any table that has a Char field longer than 3 characters. Add some entries in that field of the form "up to 15 digits" + "E" + "one or two digits", such as 123E2 or 1234E12. Then save that table's data to a local file as a spreadsheet, or use any FM to download it to an Excel file. When you open this file in Excel, cells of the above form display as scientific notation, but if you put three or more digits after the "E", such as 123E123, they display correctly.
    What I have done:
    I searched SCN for similar threads:
        Export to Excel 2007 - item number problem
        Exceding the limit of numbers in Excel at target side
        Excel download cell format problem
        Formating as Text in excel through SAP
        Converting of amount field into excel file through GUI DOWNLOAD
        Data downloaded to excel gets converted to exponential format.
        Problem with Excel download   and scientific number
        Re: Issue in displaying numbers in Excel?
        CSV Flat File Data Problem (Number converting to Scientific Notation)
    I tested accordingly, but none of these works in our case, because the ultimate receiver of the email attachment will be an external third party; we cannot ask them to change anything in their Excel.
    I searched Microsoft's help about Excel (http://support.microsoft.com/kb/214233), and it says this "Automatic Number Formatting" is normal Excel behaviour with no way to turn it off; the "work-around" Microsoft provides is not suitable for our case.
    We recently tested CL_IXML according to the weblog http://wiki.sdn.sap.com/wiki/display/Snippets/FormattedExcelasEmailAttachment and it successfully controlled the format, so this could be a solution for others whose internal table is small. But our 2 MB internal table becomes 6 MB when converted to an XML file attachment, which cannot be received by our end user's mailbox; too big.
    So please advise your ideas.
    Many thanks in advance!
    Peter Ding
    Thank you very much for your time!

    Hi,
    You can achieve this by describing the spreadsheet in XML with the help of the DOM classes.
    Later releases of Excel can read and save spreadsheets as XML; provided your release supports this, you can achieve it.
    Check out the following wiki:
    [Excel - XML|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/abap/exporting%2bdata%2bto%2bexcel%2b-%2bxml%2bto%2bthe%2brescue]
    Regards,
    Darren
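
    To make this concrete, here is a minimal sketch of the idea, assuming the Excel 2003 SpreadsheetML format: cells explicitly typed as String are exempt from Excel's automatic number formatting, so values like 1E2 stay literal. The Java wrapper and file name are only illustrative; a DOM builder or CL_IXML would produce the same document.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class StringCellXml {
            public static void main(String[] args) throws IOException {
                String xml =
                    "<?xml version=\"1.0\"?>\n"
                    + "<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"\n"
                    + "          xmlns:ss=\"urn:schemas-microsoft-com:office:spreadsheet\">\n"
                    + " <Worksheet ss:Name=\"Sheet1\">\n"
                    + "  <Table>\n"
                    // ss:Type="String" keeps 1E2 from being shown as 1.00E02
                    + "   <Row><Cell><Data ss:Type=\"String\">1E2</Data></Cell></Row>\n"
                    + "   <Row><Cell><Data ss:Type=\"String\">123E2</Data></Cell></Row>\n"
                    + "  </Table>\n"
                    + " </Worksheet>\n"
                    + "</Workbook>\n";
                Files.writeString(Path.of("cells.xml"), xml);
            }
        }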

  • Help needed with missing data problem in CRVS2010

    We recently upgraded the reporting engine in our product to use Crystal Reports for Visual Studio 2010 (previously engine was CR9). Our quote report, which has numerous subreports and lots of conditional formatting, started losing data when a quote took more than a single page to be printed. We knew the SQL results included the data, but the report was not printing those lines at all or sometimes printing a partial line. In addition, the running total on the report would exclude the lines that were being missed on the next page. In one example submitted by a customer, 3 lines were skipped between pages.
    I think I have identified two potential issues that document the possibility of data not being included in the report.
    The first potential issue is an issue with the "suppress blank section" option being checked. This issue is supposedly fixed with ADAPT01483793, being released someday with service pack 2 for CRVS2010.
    The second potential issue is using shared variables. This issue is supposedly fixed with ADAPT01484308, also targeted for SP2.
    Our quote report does not explicitly use shared variables with any of the subreports, but it does have several subreports, each in its own section that has the "suppress blank section" option checked. We have other reports that use this feature as well, and they are not exhibiting the problem.
    One different thing about the quote report is that it has a section with multiple suppression options selected. The section has a conditional suppression formula, which controls whether the section is included at all within the report. The section also has the suppress blank section option selected. There are multiple fields within the report that are each conditionally suppressed. In theory, the section's suppress formula could evaluate to true, yet all of the fields within the section are suppressed (due to null values), and then the "suppress blank section" option would kick in.
    The missing data only seems to happen when the section is not being suppressed, and at least one of the fields is being included in the report. If I clear the "suppress blank section" check box, and change the section formula to also include the rules applied to the fields in the section, the missing data problem seems to be resolved.
    Is this related to ADAPT01483793? Will it be fixed in service pack 2?
    If more details are needed, I would be happy to provide a sample report with stored data.

    Hi Don,
    Have a look at the Record Selection formula in the CR Designer (stand-alone), and when it is exported to RPT format, open that report in the Designer as well.
    There have been a few issues with => logic in the record selection formula; it could be that you are running into this problem. Look for a NOT inserted into your selection formula.
    Oh, and SP2 is coming out shortly, so it may resolve the issue. But if you want, you could purchase a support case, or if you have a support contract, create a case in SMP and get a rep to work with you to debug the issue.
    If you have not already, try the trial version of CR 2011; put it on a VMware image or test PC so you don't corrupt anything for production, and have a look at it and test in that designer also. If you purchase a case and it is a bug, then you'll get a credit back for the case.
    Don
    Edited by: Don Williams on Oct 26, 2011 7:40 AM

  • How can we export table data to a CSV file??

    Hi,
    I have the following requirement. Initially the business agreed upon exporting the table data to an Excel file. But now they would like to export the table data to a CSV file, which is not supported by the af:exportCollectionActionListener component.
    When I opened the exported CSV file, I could see the exported data surrounded by HTML tags; hence the issue.
    Does someone have a solution for this? That is, how can we export the table data to CSV format, working similarly to exporting the data to an Excel sheet?
    For your reference, here is the code which I have used to export the table data:
    <f:facet name="menus">
      <af:menu text="Menu" id="m1">
        <af:commandMenuItem text="Print" id="cmi1">
          <af:exportCollectionActionListener exportedId="t1"
                                             title="CommunicationDistributionList"
                                             filename="CommunicationDistributionList"
                                             type="excelHTML"/>
          <!-- I tried removing the value of this attribute; with no value it did not work at all. -->
        </af:commandMenuItem>
      </af:menu>
    </f:facet>
    Thanks & Regards,
    Kiran Konjeti

    Hi Alex,
    I have already visited that post, and it works only in 10g, not in 11g.
    I got the solution for this. The solution is:
    Use the following code in the jsff:
    ==================
    <af:commandButton text="Export Data" id="ctb1">
      <af:fileDownloadActionListener contentType="text/csv; charset=utf-8"
                                     filename="test.csv"
                                     method="#{pageFlowScope.pageFlowScopeDemoAppMB.test}"/>
    </af:commandButton>
    OR
    <af:commandButton text="Export Data" id="ctb1">
      <af:fileDownloadActionListener contentType="application/vnd.ms-excel; charset=utf-8"
                                     filename="test.csv"
                                     method="#{pageFlowScope.pageFlowScopeDemoAppMB.test}"/>
    </af:commandButton>
    And place this code in ManagedBean
    ======================
    public void test(FacesContext facesContext, OutputStream outputStream) throws IOException {
        DCBindingContainer dcBindings =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding itrBinding = (DCIteratorBinding) dcBindings.get("fetchDataIterator");
        tableRows = itrBinding.getAllRowsInRange();
        // preparing column headers
        PrintWriter out = new PrintWriter(outputStream);
        out.print(" ID");
        out.print(",");
        out.print("Name");
        out.print(",");
        out.print("Designation");
        out.print(",");
        out.print("Salary");
        out.println();
        // preparing column data
        for (Row row : tableRows) {
            DCDataRow dataRow = (DCDataRow) row;
            DataLoaderDTO dto = (DataLoaderDTO) dataRow.getDataProvider();
            out.print(dto.getId());
            out.print(",");
            out.print(dto.getName());
            out.print(",");
            out.print(dto.getDesgntn());
            out.print(",");
            out.print(dto.getSalary());
            out.println();
        }
        out.flush();
        out.close();
    }
    And make the following browser settings (*OPTIONAL*), only in case the file is being blocked by IE:
    ==================================================================
    http://ais-ss.usc.edu/helpdoc/main/browser/bris004b.html
    This completes the implementation of exporting table data to a CSV file in 11g.
    Thanks & Regards,
    Kiran Konjeti

  • Problem in CSV Output

    When I try to look at the CSV output, even though I am not including a table header row in the template design, it picks up some other header rows and displays them in the output. I need just the data in the CSV output.
    How do I solve this issue? Can you give me any idea how to proceed?
    Thank you.
    Have a nice day.

    If someone could specify a solution for this, it would be a great help to me.
    I'll give an example:
    The data model contains 5 columns, and the template is designed with 2 columns plus some aggregation functions. If I try to view the report in CSV format, it displays all 5 columns from the data model without considering the template. But when I view the same report in other formats (Excel, HTML, PDF, etc.) it displays exactly as designed in BI Publisher. So I request someone to give a solution for this.
    And I want to add one more question to this post:
    When a BI Publisher report is viewed in CSV format, it includes the column headers as well. I want to remove these headers from the CSV output.
    Edited by: user12511280 on Sep 12, 2011 10:08 PM

  • Comparing SQL Data Results with CSV file contents

    I have the following scenario that I need to resolve, and I'm unsure how to approach it. Let me explain what I need and what I have done so far.
    I've created an application that automatically marks assessments that delegates complete by comparing SQL data to CSV file data. I'm using C# to build the objects required to load the data from SQL into a dataset, which is then compared to the associated CSV file containing the required results to mark against.
    Currently everything works as expected, but I've noticed that if there is a difference in the number of rows returned into the SQL-based dataset, my application doesn't mark the items at all.
    Here is an example:
    Scenario: the CSV contains 4 rows with 8 columns of information; however, say the delegate was only able to insert 2 rows of data into the dataset. When this happens, everything is marked wrong, because while row 1 in both the CSV and the dataset is correct, row 2 in the dataset holds the results found in row 4 of the CSV file, yet it is compared against row 2 of the CSV file.
    How can I mark a row regardless of its order, as long as it exists, so that the delegate doesn't lose marks just because the row data in the dataset is not in exactly the same order as the row data in the CSV file?
    I'm at a loss, and any assistance will be of huge help to me. I have implemented an ORDER BY clause in the dataset and ensured that the same order is set in the CSV file. This helps for scenarios where there is the right number of rows in the dataset, but as soon as 1 row is missing from the dataset, the marking doesn't award any marks for either row, even if the data is correct.
    I hope I've made sense! If not, let me know and I will provide a better description and perhaps examples of the dataset data and the CSV data being compared.
    Thanks in advance....

    I would read the CSV into a DataTable using OleDb. Below is code I wrote a few weeks ago to do this.
    Then you can compare the two DataTables by a common primary key (like an ID number).
    Below is a webpage on comparing two DataTables for differences:
    http://stackoverflow.com/questions/10984453/compare-two-datatables-for-differences-in-c
    You can also find lots of examples by performing the following Google search:
    "c# linq compare two datatable"
    // Creates a CSVReader class
    using System;
    using System.Data;
    using System.Data.OleDb;
    using System.IO;
    using System.Windows.Forms;

    public class CSVReader
    {
        public DataSet ReadCSVFile(string fullPath, bool headerRow)
        {
            string path = fullPath.Substring(0, fullPath.LastIndexOf("\\") + 1);
            string filename = fullPath.Substring(fullPath.LastIndexOf("\\") + 1);
            DataSet ds = new DataSet();
            try
            {
                if (File.Exists(fullPath))
                {
                    string ConStr = string.Format(
                        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};" +
                        "Extended Properties=\"Text;HDR={1};FMT=Delimited\"",
                        path, headerRow ? "Yes" : "No");
                    string SQL = string.Format("SELECT * FROM {0}", filename);
                    OleDbDataAdapter adapter = new OleDbDataAdapter(SQL, ConStr);
                    adapter.Fill(ds, "TextFile");
                    ds.Tables[0].TableName = "Table1";
                    // Underscores make the column names easier to reference later
                    foreach (DataColumn col in ds.Tables["Table1"].Columns)
                        col.ColumnName = col.ColumnName.Replace(" ", "_");
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
            return ds;
        }
    }
    jdweng
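
    The key-based idea itself is language-neutral; here it is sketched in Java (the record type and values are invented): index the expected rows by their key, then look each submitted row up by key, so row order stops mattering and a missing row only costs its own marks.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class KeyedCompare {
            record Answer(String id, String value) {}

            public static void main(String[] args) {
                List<Answer> expected = List.of(new Answer("Q1", "A"), new Answer("Q2", "B"));
                List<Answer> submitted = List.of(new Answer("Q2", "B"), new Answer("Q1", "A"));

                // Index expected rows by key so submitted rows may arrive in any order
                Map<String, String> expectedById = new HashMap<>();
                for (Answer a : expected) expectedById.put(a.id(), a.value());

                for (Answer s : submitted) {
                    String want = expectedById.get(s.id());
                    boolean correct = want != null && want.equals(s.value());
                    System.out.println(s.id() + (correct ? " correct" : " wrong"));
                }
            }
        }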

  • Problem import csv file with SQL*loader and control file

    I have a .csv file looking like this:
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    I want to import this csv file to this table:
    create table artikel (artnr varchar2(10), namn varchar2(25), fp_storlek number, datum date, mtrlid varchar2(5), pris number);
    My controlfile looks like this:
    LOAD DATA
    INFILE 'e:\test.csv'
    INSERT
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    I can't get SQL*Loader to import the last column (pris) the way I want. It ignores my decimal separator, which in this case is "," and not "."; maybe this is the problem. If the decimal separator is the problem, how can I get Oracle to recognize "," as a decimal separator?
    The result of the import now is that a decimal number (37,2) becomes 372 in the table.

    Set the NLS_NUMERIC_CHARACTERS environment variable at the OS level before running SQL*Loader:
    $ cat test.csv
    E0100070;EKKJ 1X10/10 1 KV;1;2003-06-16;01C;75
    E0100075;EKKJ 1X10/10 1 KV;500;2003-06-16;01C;67
    E0100440;EKKJ 2X2,5/2,5 1 KV;1;2003-06-16;01C;37,2
    E0100445;EKKJ 2X2,5/2,5 1 KV;500;2003-06-16;01C;33,2
    E0100450;EKKJ 2X4/4 1 KV;1;2003-06-16;01C;53
    E0100455;EKKJ 2X4/4 1 KV;500;2003-06-16;01C;47,1
    $ cat artikel.ctl
    LOAD DATA
    INFILE 'test.csv'
    replace
    INTO TABLE ARTIKEL
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (ARTNR, NAMN, FP_STORLEK char "to_number(:fp_storlek,'99999')", DATUM date 'yyyy-mm-dd', MTRLID, pris char "to_number(:pris,'999999D99')")
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:01 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:11 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C          372
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C          332
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C          471
    6 rows selected.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    $ export NLS_NUMERIC_CHARACTERS=',.'
    $ sqlldr scott/tiger control=artikel
    SQL*Loader: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:41 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Commit point reached - logical record count 6
    $ sqlplus scott/tiger
    SQL*Plus: Release 10.1.0.3.0 - Production on Sat Nov 12 15:10:45 2005
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from artikel;
    ARTNR      NAMN                      FP_STORLEK DATUM      MTRLI       PRIS
    E0100070   EKKJ 1X10/10 1 KV                  1 16/06/2003 01C           75
    E0100075   EKKJ 1X10/10 1 KV                500 16/06/2003 01C           67
    E0100440   EKKJ 2X2,5/2,5 1 KV                1 16/06/2003 01C         37,2
    E0100445   EKKJ 2X2,5/2,5 1 KV              500 16/06/2003 01C         33,2
    E0100450   EKKJ 2X4/4 1 KV                    1 16/06/2003 01C           53
    E0100455   EKKJ 2X4/4 1 KV                  500 16/06/2003 01C         47,1
    6 rows selected.
    SQL>
    The control file is exactly the same as yours; I just put REPLACE instead of INSERT.

  • Update date problem

    hi,
    I am updating the date in a table t1 from the variable v1 (as 1/1/3000), which is in VARCHAR format. It stores the date, but when I fire a select query, it shows 1/1/2000, not 1/1/3000.
    My question is how to update a date column in a table from a variable (in VARCHAR format) so that it stores 1/1/3000.
    Please explain with an example.
    Rgds...

    Can you please share an example... see the example below:
    vp2 varchar2(20) := '01/01/3000';
    and it is inserted into dt.d as a date.
    SQL> alter session set nls_date_format='dd-mm-rrrr hh24:mi:ss';
    Session altered.
    SQL> desc dt
     Name                            Null?    Type
     D                                        DATE
     A                                        VARCHAR2(20)
    SQL> select * from dt;
    D                   A
    29-06-2009 16:57:00 01/01/3000
    SQL> declare
      vp2 varchar2(20) := '01/01/3000';
      v1  varchar2(100);
    begin
      v1 := 'UPDATE dt SET d = to_date(''' || vp2 || ''',''dd/mm/rrrr'')';
      DBMS_OUTPUT.PUT_LINE(v1);
      EXECUTE IMMEDIATE v1;
    end;
    /
    UPDATE dt SET d = to_date('01/01/3000','dd/mm/rrrr')
    PL/SQL procedure successfully completed.
    SQL> select * from dt;
    D                   A
    01-01-3000 00:00:00 01/01/3000
    SQL>

  • Comma and quote problem in csv file

    Hi
    My requirement is to append data to a CSV file. This is a Proxy-to-File FCC scenario. Some of the fields coming from the proxy contain a comma (,) and also a double quote ("). For these fields the value in the CSV file is split into two columns, with the part after the comma appended to the next column, and the double-quote symbol is not inserted into the CSV file at all.
    1. Why is the double quote (") not inserted into the CSV file columns?
    2. How do I overcome the comma problem? I want that particular field to stay in one column only.
    Thanks
    Vankadoath

    Hi Vankadoath,
    Were you able to solve the comma issue in the CSV file?
    Even I am facing a similar issue: whenever there is a comma in a field, the data after the comma is pushed into the next field.
    If anybody has a solution for the same, please suggest it.
    santosh.
    Edited by: santosh koraddi on Jan 20, 2011 9:44 PM
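
    For reference, the conventional CSV rule (RFC 4180) is: wrap any field containing a comma, double quote, or line break in double quotes, and double each embedded quote. Whether you can apply it depends on what your FCC/receiver allows, but the rule itself is easy to state; a Java sketch (the method name is invented):

        public class CsvEscape {
            // Quote a field per RFC 4180: wrap it in quotes if it contains a
            // comma, double quote, or line break; double any embedded quotes.
            static String escape(String field) {
                if (field.contains(",") || field.contains("\"")
                        || field.contains("\n") || field.contains("\r")) {
                    return "\"" + field.replace("\"", "\"\"") + "\"";
                }
                return field;
            }

            public static void main(String[] args) {
                System.out.println(escape("plain"));      // plain
                System.out.println(escape("a,b"));        // "a,b"
                System.out.println(escape("say \"hi\"")); // "say ""hi"""
            }
        }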

  • Parsing Date Problem

    I have a date problem that I am finding hard to resolve; please suggest what I could do.
    I want to parse String date values in formats like "19 Jun 2003" and "19/06/2003" into a Date object.
    Can I do it?
    Thanking you in advance.

    Sure. See SimpleDateFormat#parse.
    Kind regards,
      Levi
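
    A minimal sketch of Levi's pointer: try each expected pattern in turn with SimpleDateFormat (Locale.ENGLISH matters for the "Jun" month abbreviation, and lenient parsing is switched off so a string only matches the pattern it was written in):

        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.Locale;

        public class MultiFormatDateParser {
            private static final String[] PATTERNS = {"dd MMM yyyy", "dd/MM/yyyy"};

            public static Date parse(String text) throws ParseException {
                ParseException last = null;
                for (String pattern : PATTERNS) {
                    try {
                        SimpleDateFormat fmt = new SimpleDateFormat(pattern, Locale.ENGLISH);
                        fmt.setLenient(false); // reject near-misses
                        return fmt.parse(text);
                    } catch (ParseException e) {
                        last = e;
                    }
                }
                throw last;
            }

            public static void main(String[] args) throws ParseException {
                System.out.println(parse("19 Jun 2003"));
                System.out.println(parse("19/06/2003"));
            }
        }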
