Import a row-based complex CSV or Excel file into a DB using SSIS

What is the best approach to handling a complex row-based object hierarchy when importing a CSV or Excel file into a DB?
E.g. we have the following CSV or Excel data (pipe-delimited, with \r\n separating rows):
START|\r\n
MEMBER|Mr|John|Lee|\r\n
INSURANCE|2015-1-15|LIFE|\r\n
END|\r\n
START|\r\n
MEMBER|Ms|Mary|Scott|\r\n
INSURANCE|2015-1-10|LIFE|\r\n
END|\r\n
Here each START/END pair marks one record. The final goal is to transform it into XML like this:
<Members>
   <Member>
     <Title>Mr</Title>
     <FN>John</FN>
     <LN>Lee</LN>
     <Insurance>
         <Join_Date>2015-1-15</Join_Date>
         <Category>LIFE</Category>
     </Insurance>
   </Member>
   <Member>
     <Title>Ms</Title>
     <FN>Mary</FN>
     <LN>Scott</LN>
     <Insurance>
         <Join_Date>2015-1-10</Join_Date>
         <Category>LIFE</Category>
     </Insurance>
   </Member>
</Members>
 Thanks for any input.

Hi awlinq,
There is no direct option for converting Excel to XML. There are two common methods to achieve this requirement:
1. Create a schema that defines the structure of the XML file, add it as an XML Source in Microsoft Excel, and use it to map the Excel data to an XML file. For more details, please refer to the following blog:
http://www.excel-easy.com/examples/xml.html
2. Store the data in a SQL table, then convert it to XML using the bulk copy utility (BCP) and the FOR XML syntax in a T-SQL query. The following links are for your reference:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bbc70f56-1421-4970-a3c8-c03e990e0054/convert-ms-excel-file-to-xml-using-ssis?forum=sqlintegrationservices
http://www.codeproject.com/Tips/239156/Export-data-from-SQL-Server-as-XML
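As a sketch of the START/END parsing itself (outside SSIS; the same walk-the-blocks logic would fit a Script Component), here is a minimal Python illustration that builds the target XML from the rows above. Element names follow the example in the question; the function name is hypothetical.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(lines):
    """Parse pipe-delimited START/END blocks into a <Members> XML tree."""
    members = ET.Element("Members")
    member = None
    for line in lines:
        fields = line.rstrip("\r\n").split("|")
        tag = fields[0]
        if tag == "START":
            member = ET.SubElement(members, "Member")
        elif tag == "MEMBER":
            # MEMBER|Title|FirstName|LastName|
            ET.SubElement(member, "Title").text = fields[1]
            ET.SubElement(member, "FN").text = fields[2]
            ET.SubElement(member, "LN").text = fields[3]
        elif tag == "INSURANCE":
            ins = ET.SubElement(member, "Insurance")
            ET.SubElement(ins, "Join_Date").text = fields[1]
            ET.SubElement(ins, "Category").text = fields[2]
        elif tag == "END":
            member = None
    return members

data = [
    "START|", "MEMBER|Mr|John|Lee|", "INSURANCE|2015-1-15|LIFE|", "END|",
    "START|", "MEMBER|Ms|Mary|Scott|", "INSURANCE|2015-1-10|LIFE|", "END|",
]
tree = rows_to_xml(data)
print(ET.tostring(tree, encoding="unicode"))
```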
Thanks,
Katherine Xiong
TechNet Community Support

Similar Messages

  • EXCEL TO SQL USING SSIS Error

    I have an .xlsx file. I have a Data Flow Task to export that .xlsx file into SQL Server. I use SSIS 2008. However, when I run the package, it fails with the error below.
    [Connection manager "Excel Connection Manager"] Error: SSIS Error Code DTS_E_OLEDB_EXCEL_NOT_SUPPORTED: The Excel Connection Manager is not supported in the 64-bit version of SSIS, as no OLE DB provider is available.

    Hi Desigal59,
    The BIDS is a 32-bit software, therefore, only 32-bit providers are visible in the IS design GUI. So, you need the 32-bit Microsoft ACE OLE DB Provider to be installed on your local machine to create the package (using OLE DB Source instead of Excel Source
    in the Data Flow Task). If 32-bit Office 2007 or higher is installed on your local machine, the 32-bit Microsoft ACE OLE DB drivers have been installed already, otherwise, you need to install the 32-bit Microsoft Access Database Engine 2010 Redistributable.
    To run the package in 64-bit runtime mode, you need the 64-bit Microsoft ACE OLE DB Provider. However, the 32-bit and 64-bit Microsoft Access Database Engine Redistributable cannot be installed side by side on a single server. So, if you have 32-bit Office
    installed on your local server, you can install the 64-bit Microsoft Access Database Engine 2010 Redistributable and run the package in 64-bit runtime mode in BIDS by setting the Run64BitRuntime property to True. If 32-bit Office is not installed, you have
    to install only the 32-bit
    Microsoft Access Database Engine 2010 Redistributable and cannot run the package in 64-bit runtime mode in BIDS.
    After deploying the package to the SSIS server, if you want it to run in 64-bit runtime mode, you only need to make sure the 64-bit Microsoft ACE OLE DB drivers are installed and the 32-bit drivers are not necessary.
    Another way to create the package is to use the 64-bit SQL Server Import and Export Wizard, in which the 64-bit Microsoft ACE OLE DB Provider is available. You can then save it as a package to the file system from the wizard. In this way, you don't need the 32-bit Microsoft ACE OLE DB drivers to develop the package. When you open the package in BIDS, you may encounter errors or the OLE DB Connection Manager may work offline, but you can ignore the errors and run the package successfully as long as you have the 64-bit ACE OLE DB drivers installed on your computer.
    Regards,
    Mike Yin
    TechNet Community Support

  • How do I import a SPSS .sav file into SQL 2008 r2 using SSIS

    Hi there,
    Like the title says, I'm trying to import an SPSS .sav file into a SQL 2008 R2 database using SSIS.
    I've tried a few things, but I can't seem to get it to work properly. I'm hoping there's someone out there who could point me in the right direction.
    I've tried the following:
    Tried the IBM step-by-step manual (http://pic.dhe.ibm.com/infocenter/spssdc/v6r0m1/index.jsp?topic=%2Fcom.spss.ddl%2Faccess_odbc.htm)
    -- Couldn't follow this guide because I didn't have the SPSS MR DM-2 OLE DB Provider.
    Tried installing the provider using (http://pic.dhe.ibm.com/infocenter/spssdc/v7r0m0/index.jsp?topic=%2Fcom.spss.ddl%2Fdts_troubleshooting.htm) as a guide to download the provider listed (www-03.ibm.com/software/products/nl/spss-data-model).
    -- This didn't work; I still could not see the provider listed after installing and rebooting the server.
    Tried to get the file as CSV (the company couldn't provide it in CSV or another format).
    The server is a Windows Server 2008 R2 Enterprise 64-bit, has the Data Collection Data Model installed on it with SQL Server 2008 R2.
    If anyone could point me in the right direction they could make my day!
    Thanks in advance!
    Ronny

    Hi Ronny,
    According to your description, you want to import an SPSS .sav file into a SQL Server 2008 R2 database. If that is the case, we can achieve this requirement with SPSS or a SQL query. For more details, please refer to the following methods:
    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93346
    http://ilovespss.blogspot.com/2007/10/migrating-data-from-spss-to-sql-server.html
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Loading multiple .csv files into a table using SSIS

    Hi,
    I have a requirement where I have 200+ csv files to be loaded into Netezza table using SSIS.
    The issue I am facing is that the files have different numbers of columns; for example, file 1 has columns A, B, C and file 2 has columns C, D, E. My target table has all columns from A to E in it.
    But when I use a For Each Loop container, only the file whose path and name I have specified in the loop variable gets loaded. No data is loaded from the rest of the files, yet the package executes successfully.
    Any help is appreciated.
    Regards,
    VT

    If you want to iterate through files, all files should be in the same folder and you should use the file enumerator type within the ForEach Loop. Then, inside the loop, you may need a Script Task to generate the data flow on the fly based on the available input columns from each file and do the mapping in the destination. I assume you put NULLs (or some default value) for the columns missing from a file.
    http://blog.quasarinc.com/ssis/best-solution-to-load-dynamically-change-csv-file-in-ssis-etl-package/
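The per-file mapping idea (a union of target columns, with NULL for whatever a file is missing) can be sketched like this. This is a minimal Python illustration of the mapping logic, not the SSIS Script Task itself; the column names A-E follow the example above.

```python
import csv
import io

TARGET_COLUMNS = ["A", "B", "C", "D", "E"]

def load_file(text):
    """Read one CSV and pad missing target columns with None (NULL)."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        rows.append({col: rec.get(col) for col in TARGET_COLUMNS})
    return rows

file1 = "A,B,C\n1,2,3\n"   # file 1 has columns A, B, C
file2 = "C,D,E\n7,8,9\n"   # file 2 has columns C, D, E
all_rows = load_file(file1) + load_file(file2)
print(all_rows)
```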
    Visakh
    http://visakhm.blogspot.com/

  • Extract data from Flat file CSV to SQL Server 2008 using SSIS 2008 (Data gets corrupted when extraction)

    Hi,
    I am trying to extract data from multiple CSV files into a single SQL table. The data type of all the columns in the SQL table is nvarchar(MAX). I am able to extract the data from the flat files, but some of the data is corrupted on extraction, including question marks (?) and other invalid special characters. I tried selecting the UTF-8 and 65001 (Unicode) code pages, but the problem persists. I also tried using a data conversion, but with no luck.
    I checked the data in the flat file, and there is no data with question marks (?) or any other special characters.
    The separator in the flat file is a comma (,).
    Please help.
    Thanks in advance.

    The source system and application determine the code page and encoding. Is it Windows, Unix, mainframe, or some other type?
    Unicode files sometimes begin with a byte order mark (2 bytes) to indicate little or big endian. If you open the file in Notepad and then select Save As, the encoding in the dialog will show what Notepad detected based on the BOM. If that is ANSI instead of Unicode or UTF-8, you will need to know the code page the source system used when the file was created.
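The BOM check described above can also be done programmatically; here is a small Python sketch of the same detection (the function name is hypothetical, and UTF-32 BOMs are ignored for brevity):

```python
def detect_bom(raw: bytes) -> str:
    """Return the encoding implied by a leading byte order mark, if any."""
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le"
    if raw.startswith(b"\xfe\xff"):
        return "utf-16-be"
    return "no BOM"  # ANSI or unknown; ask the source system for its code page

print(detect_bom("text".encode("utf-8-sig")))  # → utf-8-sig
```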
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Macro to compare CSV and Excel file

    Team,
    Do we have any macro to compare a CSV file and an Excel file?
    1) In Excel we should have two text boxes asking the user to select the files from the paths where they are stored
    2) The first text box is for the CSV file
    3) The second text box is for the Excel file
    4) We should have a Compare button that runs the macro and shows the differences
    Thanks!
    Kiran

    Hi
    Based on my understanding, we need to convert the two files into the same format before comparing files of different formats.
    Here are the options:
    1. Convert the CSV file and the Excel file into two-dimensional arrays, then compare them. For how to load a CSV file into a VBA array, you can search the internet or ask in the VBA forum.
    VBA forum:
    http://social.msdn.microsoft.com/Forums/en-US/home?forum=isvvba
    2. Alternatively, you can import the CSV file into an Excel sheet and then compare the sheets.
    Here is the sample code for your reference:
    ' Import a CSV file into the active sheet
    Sub InputCSV()
        Dim Wb As Workbook
        Dim Arr
        Set Wb = GetObject(Application.GetOpenFilename("csv file,*.csv", , "please choose a csv file", , False))
        Arr = Wb.ActiveSheet.Range("A1").CurrentRegion
        Wb.Close False
        Range("A1").Resize(UBound(Arr), UBound(Arr, 2)) = Arr
    End Sub
    ' Compare Sheet1 against Sheet2, highlighting matches and differences
    Sub CompareTable()
        Dim tem As String, tem1 As String
        Dim text1 As String, text2 As String
        Dim a As Integer
        Dim hang1 As Long, hang2 As Long, lie As Long
        Dim maxhang As Long, maxlie As Long
        ' Clear previous highlighting
        Sheets("Sheet1").Columns("A:A").Interior.Pattern = xlNone
        Sheets("Sheet2").Rows("1:4").Interior.Pattern = xlNone
        maxhang = 250   ' adjust to the number of rows to compare
        maxlie = 40     ' adjust to the number of columns to compare
        For hang1 = 2 To maxhang
            a = 0
            tem = Sheets(1).Cells(hang1, 1)
            For hang2 = 1 To maxhang
                tem1 = Sheets(2).Cells(hang2, 1)
                If tem1 = tem Then
                    a = 1
                    Sheets(2).Cells(hang2, 1).Interior.ColorIndex = 6
                    For lie = 1 To maxlie
                        text1 = Sheets(1).Cells(hang1, lie)
                        text2 = Sheets(2).Cells(hang2, lie)
                        If text1 <> text2 Then
                            Sheets(2).Cells(hang2, lie).Interior.ColorIndex = 8
                        End If
                    Next
                End If
            Next
            If a = 0 Then
                Sheets(1).Cells(hang1, 1).Interior.ColorIndex = 5
            End If
        Next
    End Sub
    Hope this will help.
    Best Regards
    Lan

  • How to read a .csv file(excel format) using Java.

    Hi Everybody,
    I need to read a .csv file (Excel format) and store all the columns and rows in 2D arrays. Then I can do the rest of the coding myself. I would appreciate it if somebody could post their code to read .csv files here, along with the classes that need to be imported. The .csv file can have a different number of columns and a different number of rows every time it is run. The .csv file is in Excel format, so I don't know if that affects the code or not. I would also like to know if there is a way to recognize how many rows and columns the .csv file has. I need this urgently, so I would be very grateful to anybody who has the solution. Thanks.
    Sincerely Taufiq.

    I used this:
    BufferedReader in = new BufferedReader(new FileReader("test.csv"));
    String str;
    while ((str = in.readLine()) != null) {
        StringTokenizer parser = new StringTokenizer(str, ",");
        while (parser.hasMoreTokens()) {
            String field = parser.nextToken();
            // process each field here
        }
    }
    in.close();
    works like a charm!
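One caveat with tokenizing on commas: quoted fields that themselves contain commas get split incorrectly. A dedicated CSV parser handles this; here is a short Python sketch of the same read-into-2D-array idea (sample data is hypothetical):

```python
import csv
import io

# a quoted field containing a comma, which naive tokenizing would split in two
sample = 'name,age\nJohn,30\n"Lee, Mary",25\n'
rows = list(csv.reader(io.StringIO(sample)))   # 2D array: list of row lists
print(len(rows), len(rows[0]))  # row count and column count
```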

  • What is the maximum file size for CSV that Excel can open ? (Excel 2013 64bit)

    Hello,
    Before anyone jumps in, I am not talking about the maximum worksheet size of 1048576 rows by 16384 columns.
    I have a client who has 1.5 GB, 1.9 GB, 2.6 GB, 5 GB, 17 GB and 89 GB CSV files (huge).
    If I open the 1.5 GB file, it opens (after waiting 5 minutes) and then a warning pops up that only the first 1,048,576 rows have loaded. That is fair enough.
    If I try to open any of the others, Excel comes up with a blank worksheet. No errors. It just seems to ignore the file I tried to open. This happens from within Excel (File - Open) or from double-clicking the file in Explorer.
    Excel goes to this blank page almost immediately. It does not even try to open the file.
    If I try with MS Access, I get a size warning and it refuses to load the file. (At least I get a warning.)
    I would have expected Excel to load at least the first 1,048,576 rows (if that is what is in the file) and give an error.
    The computer is more than capable (Xeon processors, 16 GB RAM, SSD hard disks; a top-of-the-line HP Z820 power workstation).
    With the 1.5 GB file loaded to 1,048,576 rows, it uses 15% RAM/pagefile. CPUs hit about 5%.
    I have confirmed it is Win 7 64-bit, Excel 64-bit. I am fairly confident we are over the file size limit, but without an error message I don't know what to tell my client, who is looking to me for answers.
    I have already discussed that the 89 GB file in Excel is unreasonable and they are looking at a stats package, but I need an answer on these smaller files.
    Anyone got any ides ?
    Michael Jenkin (Mickyj) www.mickyj.com (Community website) - SBS MVP (2004 - 2008) *5 times Microsoft MVP award winner *Previously MacWorld Australia contributer *Previously APAC Vice Chairman Culminis (Pro IT User group support system)* APAC chairman GITCA
    *Director Business Technology Partners, Microsoft Small Business Specialist, SMB150 2012 Member

    Hi,
    The 1,048,576 rows by 16,384 columns is the worksheet size limitation in Excel 2013. Thus, I recommend trying Mr. Bernie's suggestions to import the large CSV file.
    1. Use VBA to read the file line by line and split/examine the import file in sections. If you have further question about the VBA, please post your question to the MSDN forum for Excel
    http://social.msdn.microsoft.com/Forums/en-US/home?forum=exceldev&filter=alltypes&sort=lastpostdesc
    2. Use Excel 2013 add-ins. Power Pivot and Power Query. For more detailed information, please see the below articles: 
    http://social.technet.microsoft.com/Forums/en-US/9243a533-4575-4fd6-b93a-4b95d21d9b10/table-with-more-than-1-048-576-rows-in-power-query-excel-2013?fo
    http://www.microsofttrends.com/2014/02/09/how-much-data-can-powerpivot-really-manage-how-about-122-million-records/
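Option 1 (read the file line by line and split it into sections) can be sketched outside VBA as well. Here is a minimal Python illustration of splitting a CSV into worksheet-sized chunks, repeating the header in each; the row limit is shrunk to 3 to keep the example small, and the function name is hypothetical.

```python
import csv
import io

EXCEL_ROW_LIMIT = 3  # the real Excel limit is 1_048_576 rows per worksheet

def split_csv(text, limit=EXCEL_ROW_LIMIT):
    """Split a CSV into chunks of at most `limit` data rows, each with the header."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    chunks, current = [], []
    for row in reader:
        current.append(row)
        if len(current) == limit:
            chunks.append([header] + current)
            current = []
    if current:
        chunks.append([header] + current)
    return chunks

big = "id,val\n" + "".join(f"{i},x\n" for i in range(7))  # 7 data rows
parts = split_csv(big)
print(len(parts))  # → 3
```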
    Please Note: Since the web site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
    Thanks
    George Zhao
    Forum Support

  • Adding row into existing CSV file using C#

    How do I add a row to an existing CSV file using .NET code? The file should not be overwritten; it needs another row appended with data. How do I implement this scenario?

    Hi BizQ,
    If you only need to write some data to a CSV file, please follow A.Zaied's and Magnus's replies. In general, we use CSV files to import or export data. The following thread and a good CodeProject article cover this:
    Convert a CSV file to Excel using C#
    Writing a DataTable to a CSV file
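In .NET the key point is opening the file in append mode (e.g. with File.AppendAllText or a StreamWriter constructed for appending) rather than rewriting it. The same idea in a short Python sketch, with a hypothetical file name:

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.csv")  # hypothetical file

# create the file with a header row, then append a data row without overwriting
with open(path, "w", newline="") as f:
    csv.writer(f).writerow(["id", "name"])
with open(path, "a", newline="") as f:   # "a" = append mode, not overwrite
    csv.writer(f).writerow([1, "John"])

with open(path) as f:
    lines = f.read().splitlines()
print(lines)  # → ['id,name', '1,John']
```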
    Best regards,
    Kristin

  • Formating the Row Based on one column value

    Hi Friends
    I am trying to format the entire row based on the value of the first column in my Answers report.
    For example, if the first column value is 'F', I want the entire row to be colored.
    I can do conditional formatting on one column, but I want to apply it to the entire row.
    F     8.1 %     12.0 %
    E     5.2 %     3.5 %
    M     2.3 %     3.3 %
    If any one has done this or any suggestions please respond
    Thanks
    Sang

    Its a Pivot View
    F 8.1 % 12.0 %
    E 5.2 % 3.5 %
    M 2.3 % 3.3 %
    Column 1 (F, E, M) is the product.
    Column 2 (8.1%, 5.2%, 2.3%) is the sales in 2008.
    Column 3 (12.0%, 3.5%, 3.3%) is the sales in 2009.
    So will I be able to apply the formatting in the pivot view based on one column to the other columns? If yes, please let me know how, or suggest whether this can be done using BI Office or BI Publisher.
    Using BI Office I can do the formatting in Excel, but once I refresh the data all the formatting is gone... :(
    I don't know BI Publisher; if we have to use BIP, please suggest a solution. This is very urgent and important report formatting that they need here.
    Thanks in advance, David
    sango

  • Count the number of rows based on the values!!!

    Hi all,
    What I am using:
    I am working with a multidimensional database in Visual Studio 2010 using its Data source view and calculation member and dimension usage.
    What I want to do:
    I have a fact table that has five columns (legs, hands, head, body, overall) showing the category of how severe an injury is. (For the record, the columns never have an empty value; no injury is stored as 'No injury'.) These five columns are connected to a dimension that has all the available values (injury categories A-E). The overall column holds the most severe value of the other four. I want to create a bar chart with five different measure values, one for each column, counting the values in those columns.
    For example: I have a slicer and a bar chart in Excel, and the slicer has all the values of the injury category (Cat A, Cat B, Cat C, ... Cat E, No injury). When I select one of them, say Cat C, the bar chart should update and show how many Cat C values each measurement column has.
    Example FACT table:
    ID   LEG   HAND   HEAD   BODY   OVERALL
    1    No    A      No     No     A
    2    No    D      C      C      C
    3    E     C      D      A      A
    4    E     E      B      C      B
    So if i selected C the bar chart will count   (Leg = 0, Hand = 1, Head = 1, body = 2 and Overall = 1).
    Any ideas ?
    Thanks for the help and the time :) 

    Hi DBtheoN,
    According to your description, you want to create a chart on an Excel worksheet that counts rows based on a value, right? If that is the case, I am afraid this issue belongs to the Office forum; I am not an expert on Office, so you could post the issue on the corresponding forum.
    However, this requirement can be done easily in SQL Server Reporting Services. You can use the expression below to count the rows.
    =COUNT(IIF(Fields!LEG.Value=Parameters!TYPE.Value,1,NOTHING))
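The COUNT/IIF expression above counts the rows where a column equals the selected parameter, once per measurement column. The same counting logic as a plain sketch, using the fact rows from the example (function name hypothetical):

```python
# fact rows copied from the example table above
FACT = [
    {"LEG": "No", "HAND": "A", "HEAD": "No", "BODY": "No", "OVERALL": "A"},
    {"LEG": "No", "HAND": "D", "HEAD": "C", "BODY": "C", "OVERALL": "C"},
    {"LEG": "E", "HAND": "C", "HEAD": "D", "BODY": "A", "OVERALL": "A"},
    {"LEG": "E", "HAND": "E", "HEAD": "B", "BODY": "C", "OVERALL": "B"},
]

def count_by_category(rows, category):
    """Per-column count of rows matching the selected injury category."""
    cols = ["LEG", "HAND", "HEAD", "BODY", "OVERALL"]
    return {c: sum(1 for r in rows if r[c] == category) for c in cols}

print(count_by_category(FACT, "C"))
# → {'LEG': 0, 'HAND': 1, 'HEAD': 1, 'BODY': 2, 'OVERALL': 1}
```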
    Regards,
    Charlie Liao
    TechNet Community Support

  • I need to import data from a CSV file to an Oracle table

    I need to import data from a CSV file to an Oracle table. I'd prefer to use either SQL Developer or SQL Plus code.
    As an example, my target database is HH910TS2, server is ADDb0001, my dB login is em/em, the Oracle table is AE1 and the CSV file is AECSV.
    Any ideas / help ?

    And just for clarity, it's good to get your head around some basic concepts...
    user635625 wrote:
    I need to import data from a CSV file to an Oracle table. I'd prefer to use either SQL Developer or SQL Plus code.
    SQL Developer is a GUI front end that submits code to the database and displays the results. It does not have any code of its own (although it may have some "commands" that are SQL Developer specific).
    SQL*Plus is another front end (character-based rather than GUI) that submits code to the database and displays the results. It also does not have code of its own, although there are SQL*Plus commands for use only in the SQL*Plus environment.
    The "code" that you are referring to is either SQL or PL/SQL, so you shouldn't limit yourself to thinking it has to be for SQL Developer or SQL*Plus. There are many front-end tools that can all deal with the same SQL and/or PL/SQL code. Focus on the SQL and/or PL/SQL side of your coding and don't concern yourself with the limitations of the tool you are using. We often see people on here who don't recognise these differences and then ask why their code isn't working when they've put SQL*Plus commands inside their PL/SQL code. ;)
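One tool-agnostic approach along these lines is to generate plain SQL INSERT statements from the CSV, which any front end (SQL Developer, SQL*Plus, etc.) can then run. A minimal Python sketch; the table name AE1 comes from the question, the sample data and function name are hypothetical, and quoting here is the bare minimum (SQL*Loader is the usual production tool for bulk loads):

```python
import csv
import io

def csv_to_inserts(text, table):
    """Turn CSV rows into INSERT statements runnable in SQL*Plus."""
    reader = csv.reader(io.StringIO(text))
    cols = next(reader)  # first row holds the column names
    stmts = []
    for row in reader:
        # double any embedded single quotes, per SQL string-literal rules
        vals = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
        stmts.append(f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({vals});")
    return stmts

sample = "ID,NAME\n1,O'Brien\n"
print(csv_to_inserts(sample, "AE1")[0])
# → INSERT INTO AE1 (ID, NAME) VALUES ('1', 'O''Brien');
```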

  • Adding rows based on current and next row

    I got some excellent help on multiplying rows based on start and end date in this
    thread, resulting in the query below. It helps me follow Vehicle activity and Vehicle allocation of our vehicles day by day. Now I would like to add another feature to the query, if it is possible.
    The problem is that in our database, only actual tasks are registered, which means that the time when the vehicle is between tasks is not showing. I could of course calculate total available time per vehicle and month, but that would not tell me where the
    vehicles are waiting, when during the day, etc.
    So I would like to insert rows for when the vehicles are standing still, and the logic would be something like this:
    If the vehicle number on the current row equals the vehicle number on the next row, and the End (date/time) of the current row < the Start (date/time) of the next row, insert a row after the current row. Set Start = End of the current row and End = Start of the next row. Set From = To of the current row and To = To of the current row. Set Vehicle activity to "Not active". Finally, copy all other fields from the current row.
    Is this possible to achieve in Power Query?
    Brgds,
    Caj
    let
        Source = Sql.Databases("sql10"),
        SLM = Source{[Name="SLM"]}[Data],
        dbo_V_LKPI = SLM{[Schema="dbo",Item="V_LKPI"]}[Data],
        RenamedColumns = Table.RenameColumns(dbo_V_LKPI,{{"ActualDeparture", "Start"}, {"ActualArrival", "End"}}),
         Records = Table.ToRecords(RenamedColumns),
          DateTime.IsSameDay = (x, y) => Date.Year(x) = Date.Year(y) and Date.Month(x) = Date.Month(y) and Date.Day(x) = Date.Day(y),
          Expand = (x) => List.Generate(
              () => Record.Combine({x, [End=Date.EndOfDay(x[Start])]}),
              (record) => record[Start] <= x[End],
              (record) => let
                  NextStart = Date.StartOfDay(Date.AddDays(record[Start], 1)),
                  NextEnd = Date.EndOfDay(NextStart),
                  ThisEnd = List.Min({NextEnd, x[End]})
              in
                  Record.Combine({record, [Start=NextStart, End=ThisEnd]})),
          Transformed = List.Transform(Records, each if DateTime.IsSameDay([Start], [End]) then {_} else Expand(_)),
          Combined = List.Combine(Transformed),
          Result = Table.FromRecords(Combined)
      in
          Result
    Csten

    Here's some sample code. Again, we use List.Generate to build either a record or a list of records and then use List.Combine to bring the results back together before converting them to a table.
    let
        CombineTwoRows = (x, y) =>
            let
                combine = x[Vehicle] = y[Vehicle] and x[End] < y[Start],
                added = Record.Combine({x, [Start=x[End], End=y[Start], Active=false]}),
                result = if combine then {added, y} else {y}
            in result,
        GenerateStandingRows = (table, combine) =>
            let
                Handle = (x, y) => {x, y},
                buffered = Table.Buffer(table),
                n = Table.RowCount(buffered),
                windows = List.Generate(
                    () => {1, {buffered{0}}},
                    (x) => x{0} <= n,
                    (x) => {x{0} + 1, if x{0} < n then combine(buffered{x{0}-1}, buffered{x{0}}) else {buffered{x{0}}}},
                    (x) => x{1})
            in
                windows,
        InsertInactivity = (table) => Table.FromRecords(List.Combine(GenerateStandingRows(table, CombineTwoRows))),
        TestData = Table.FromRows({
            {1, #datetime(2014, 2, 23, 13, 0, 0), #datetime(2014, 2, 23, 13, 10, 0), true},
            {1, #datetime(2014, 2, 23, 13, 20, 0), #datetime(2014, 2, 23, 13, 30, 0), true},
            {2, #datetime(2014, 2, 23, 13, 20, 0), #datetime(2014, 2, 23, 14, 0, 0), true},
            {2, #datetime(2014, 2, 23, 14, 20, 0), #datetime(2014, 2, 23, 14, 40, 0), true},
            {2, #datetime(2014, 2, 23, 16, 0, 0), #datetime(2014, 2, 23, 17, 0, 0), true},
            {2, #datetime(2014, 2, 24, 2, 0, 0), #datetime(2014, 2, 24, 3, 0, 0), true},
            {3, #datetime(2014, 2, 24, 1, 0, 0), #datetime(2014, 2, 24, 8, 0, 0), true},
            {3, #datetime(2014, 2, 24, 9, 0, 0), #datetime(2014, 2, 24, 10, 0, 0), true}
            }, {"Vehicle", "Start", "End", "Active"})
    in
        InsertInactivity(TestData)
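The gap-insertion rule (same vehicle, End of the current row earlier than Start of the next, so insert a "not active" row in between) reads more directly outside Power Query. A minimal Python sketch over (vehicle, start, end, active) tuples, assuming the rows are already sorted by vehicle and start time:

```python
from datetime import datetime as dt

def insert_inactivity(rows):
    """rows: (vehicle, start, end, active) tuples, sorted by vehicle then start."""
    out = []
    for cur, nxt in zip(rows, rows[1:] + [None]):
        out.append(cur)
        if nxt and cur[0] == nxt[0] and cur[2] < nxt[1]:
            # same vehicle with a gap: fill it with a standing-still row
            out.append((cur[0], cur[2], nxt[1], False))
    return out

tasks = [
    (1, dt(2014, 2, 23, 13, 0), dt(2014, 2, 23, 13, 10), True),
    (1, dt(2014, 2, 23, 13, 20), dt(2014, 2, 23, 13, 30), True),
    (2, dt(2014, 2, 23, 13, 20), dt(2014, 2, 23, 14, 0), True),
]
result = insert_inactivity(tasks)
print(len(result))  # → 4: one inactive row inserted for vehicle 1
```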

  • .csv's, excel and ssis

    Hi, we run 2012 Std. I've been using SSIS now for 7 years or more and have always (especially lately) shuddered at the thought of needing to bake Excel file functionality into my packages. Some of the nightmares I remember include 32-bit directory limitations, special add-ons to the SQL server, column typing, Jet engines, compatibilities, leading zeros, sheets, etc. Maybe I'm wrong, but I'm under the impression that full/custom control of a .csv's content involves creating it with Excel.
    My personal feeling is that it just shouldn't be that difficult. But it is, and we never know what new surprise the next release of either product brings.
    I'm faced with baking some .csv file creation into my package and wonder... can I use a more basic approach to creating these files? One that uses flat-file technology if I'm willing to do the formatting? I vaguely remember testing this theory (by putting my own delimiters in) once, and the file did indeed open in Excel.
    I wonder about metadata (related to column headings, data types, etc.) that might accompany the data in such a file when a .csv is created from Excel instead of the flat-file approach. I wonder about commas that are in the character data as opposed to being delimiters. Maybe I'll wish I'd never gone down this road, because I'd lose control over managing such metadata if the need should arise. My consumer is a separate business, and I may not be in a position to tell them to live with the generic way the file was created.
    Does anybody feel my pain? Any suggestions? I'm also thinking about reusability through DMVs, metadata in control tables, etc.

    Here is the solution I am going with. It is mostly untested and in need of review for efficiency and maintainability. It's a first draft.
    I am not a fan of SSIS's flat-file components. They affect time to market in an agile environment. I am a fan of reusability. I haven't considered yet how this solution can work with linked servers.
    Basically, what I've tried to achieve here is dynamic functionality that can take a subset of any table's columns and format them into either a CSV or fixed-positional multi-row result set (with a header if the format is CSV). Both ID (unique record identifier) and FileInstanceId (surrogate for the original file name) must minimally be present on the source data table whose columns are being formatted for eventual landing in a flat file. It would be SSIS's responsibility to land the result set returned by this tool in a flat file.
    There are two metadata tables, one at the table level and one at the column level. Constants are allowed in the second. There is a proc and a function. The function made the proc slightly more readable. There is an option for double quotes around string values. Presently, there are no constraints preventing bogus/contradictory data from being entered into the metadata tables.
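The core idea described above, driving CSV versus fixed-positional formatting from per-column metadata (constants, from/to positions, optional double quotes), can be sketched compactly outside T-SQL. This is an illustration of the technique, not the proc below; the metadata fields loosely mirror ourAppNameSourceDataColumns, and all names and values are hypothetical:

```python
# per-column metadata: source column or constant, fixed positions, quoting flag
COLUMNS = [
    {"source": "FN", "constant": None, "from": 1, "to": 10, "quote": True},
    {"source": None, "constant": "LIFE", "from": 11, "to": 14, "quote": False},
]

def format_row(row, fmt):
    """Format one source row as either a csv or fixed-positional record."""
    parts = []
    for c in COLUMNS:
        val = c["constant"] if c["constant"] is not None else row.get(c["source"], "")
        if fmt == "csv":
            parts.append('"%s"' % val if c["quote"] else val)
        else:  # fixed positional: pad each value to its column's width
            parts.append(val.ljust(c["to"] - c["from"] + 1))
    return ",".join(parts) if fmt == "csv" else "".join(parts)

print(format_row({"FN": "John"}, "csv"))    # → "John",LIFE
print(format_row({"FN": "John"}, "fixed"))
```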
    USE [somedb]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[ourAppNameSourceDataTables](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [TableName] [varchar](128) NULL,
    [DatabaseName] [varchar](128) NULL,
    [Format] [varchar](25) NULL,
    [IsActive] [bit] NULL,
    [ModifiedDate] [datetime] NOT NULL,
    CONSTRAINT [PK_ourAppNameSourceDataTables] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[ourAppNameSourceDataTables] ADD DEFAULT (getdate()) FOR [ModifiedDate]
    GO
    USE [somedb]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[ourAppNameSourceDataColumns](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [FK_ourAppNameSourceDataTables] [int] NULL,
    [TargetColumnName] [varchar](128) NULL,
    [Order] [smallint] NULL,
    [SourceColumnName] [varchar](128) NULL,
    [IsConstant] [bit] NULL,
    [ConstantValue] [varchar](100) NULL,
    [FromColumn] [smallint] NULL,
    [ToColumn] [smallint] NULL,
    [AlsoDoubleQuoteDelimit] [bit] NULL,
    [IsActive] [bit] NULL,
    [ModifiedDate] [datetime] NOT NULL,
    CONSTRAINT [PK_ourAppNameSourceDataColumns] PRIMARY KEY CLUSTERED
    (
    [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[ourAppNameSourceDataColumns] ADD DEFAULT (getdate()) FOR [ModifiedDate]
    GO
    USE [somedb]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[usp_createOurAppNameFiles]
    @table varchar(128),
    @database varchar(128),
    @FileInstanceID bigint
    AS
    BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
    declare @format varchar(25)
    declare @fileID int
    declare @hdrVar varchar(max) = null
    declare @lowRecordId bigint
    declare @highRecordId bigint
    declare @currentID bigint
    declare @rowCount bigint
    declare @dtlVar varchar(max)
    declare @tblFinal table (id int primary key, record varchar(max)) -- holds final results
    --get file properties
    select @format = [Format],
    @fileID = [ID]
    from [dbo].[ourAppNameSourceDataTables]
    where [IsActive] = 1 and
    [TableName] = @table and
    [DatabaseName] = @database
    --build a header if csv, coalesce can also be used
    if @format = 'csv' begin
    select @hdrVar =
    (SELECT
    ',' + TargetColumnName
    FROM [dbo].[ourAppNameSourceDataColumns]
    where [FK_ourAppNameSourceDataTables] = @fileID and
    IsActive=1
    order by [Order]
    for xml path (''))
    insert @tblFinal select -1, substring(@hdrVar,2, len(@hdrVar) -1)
    end
    --data type characteristics
    create table #Meta (SourceColumnName varchar(128) primary key,
    Characteristics varchar(100))
    declare @sqlMeta varchar(max) = 'insert #Meta select COLUMN_NAME, DATA_TYPE from ' +
    @database +
    '.[INFORMATION_SCHEMA].[COLUMNS] where TABLE_SCHEMA + ' +
    '''' + '.' + '''' + ' + TABLE_NAME = ' + '''' + @table + ''''
    exec (@sqlMeta)
    --template
    declare @tblTemplate table (ID int primary key, [Order] smallint, SourceColumnName varchar(128),
    IsConstant bit, ConstantValue varchar(100),
    FromColumn smallint, ToColumn smallint, AlsoDoubleQuoteDelimit bit, Characteristic varchar(100))
    insert @tblTemplate
    select a.*, Characteristics from (
    select ID,[Order],case when SourceColumnName is null then '' else SourceColumnName end SourceColumnName,
    IsConstant,ConstantValue,FromColumn,ToColumn,AlsoDoubleQuoteDelimit
    from [dbo].[ourAppNameSourceDataColumns]
    where [IsActive] =1 and
    [FK_ourAppNameSourceDataTables] = @fileId
    ) a
    left join #Meta b
    on a.SourceColumnName = b.SourceColumnName
    drop table #Meta
    --create the equivalent of an unpivot query for the data
    declare @sqlPivotQuery varchar(max)
    SELECT @sqlPivotQuery = (
    select
    ' union all select ID,' +
    case when IsConstant = 1 then '''' + case when ConstantValue is null then '' else ConstantValue end + ''''
    else 'cast(case when ' + SourceColumnName + ' is null then ' + '''' + '''' + ' else ' + SourceColumnName + ' end as varchar(max))' end +
    ',' +
    cast([ORDER] as varchar(5)) +
    ',' + '''' + case when characteristic is null then '' else characteristic end + '''' + ',' +
    case when FromColumn is null then 'null' else cast(FromColumn as varchar(5)) end + ',' +
    case when ToColumn is null then 'null' else cast(ToColumn as varchar(5)) end + ',' +
    cast(AlsoDoubleQuoteDelimit as char(1)) +
    ' from ' +
    @database + '.' + @table +
    ' where FileInstanceId = ' + cast(@FileInstanceId as varchar(19))
    FROM @tblTemplate
    order by [Order]
    for xml path (''))
    set @sqlPivotQuery = 'insert #Pivot ' + substring(@sqlPivotQuery,12,len(@sqlPivotQuery) - 9)
    --unpivot the data
    create table #Pivot (pk int identity(1,1) primary key, ID bigint,value varchar(max),[order] smallint, characteristic varchar(100),
    FromColumn smallint,ToColumn smallint, AlsoDoubleQuoteDelimit bit)
    create index ix on #Pivot(ID)
    exec (@sqlPivotQuery)
    --build required records
    select @rowCount = count(*) from #Pivot
    if @rowCount > 0
    begin
    select @lowRecordId = min(ID) from #Pivot
    select @highRecordId = max(ID) from #Pivot
    set @currentID = @lowRecordId
    While @currentId <= @highRecordId
    begin
    set @dtlVar = ''
    select @dtlVar =
    (SELECT
    case when @format = 'csv' then ',' else '' end +
    case when AlsoDoubleQuoteDelimit = 1 then '"' else '' end +
    dbo.udf_ourAppNameFieldFormat (value,characteristic,FromColumn, ToColumn) +
    case when AlsoDoubleQuoteDelimit = 1 then '"' else '' end
    FROM #Pivot
    where [ID] = @currentId
    order by [Order]
    for xml path (''))
    insert @tblFinal
    select @currentId,case when @format = 'csv' then substring(@dtlVar,2,len(@dtlVar) -1) else @dtlvar end
    select @currentId = min(ID) from #Pivot where ID > @currentId
    if @currentId is null set @currentId = @highRecordId + 1
    end
    end
    drop table #Pivot
    select record from @tblFinal order by id
    END TRY
    BEGIN CATCH
    DECLARE @ErrorMessage VARCHAR(500),@ErrorSeverity VARCHAR(10),@ErrorState CHAR(4)
    SELECT
    @ErrorMessage = ERROR_MESSAGE(),
    @ErrorSeverity = ERROR_SEVERITY(),
    @ErrorState = ERROR_STATE();
    RAISERROR (@ErrorMessage, -- Message text.
    @ErrorSeverity, -- Severity.
    @ErrorState); -- State.
    END CATCH
    END
    GO
    USE somedb
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE FUNCTION [dbo].[udf_ourAppNameFieldFormat]
    (
    @value varchar(max),
    @characteristic varchar(100),
    @FromColumn smallint,
    @ToColumn smallint
    )
    RETURNS varchar(max)
    AS
    BEGIN
    declare @return varchar(max) = ltrim(rtrim(case when @value is null then '' else @value end))
    set @return = case when @characteristic like '%date%' then
    case when @return = '' then @return
    else CONVERT(VARCHAR(8), cast(@return as datetime), 112)
    end
    else @return
    end
    if @FromColumn is null or @ToColumn is null return @return
    if @ToColumn < @FromColumn return null
    if len(@return) >= @ToColumn - @FromColumn + 1 return substring(@return,1,@ToColumn - @FromColumn + 1)
    set @return = @return + replicate(' ', @ToColumn - @FromColumn + 1 - len(@return))
    RETURN @return
    END
    GO

  • Jtable row-based renderer

    Hi,
    I have a JTable with 6 columns and several rows, and I want to paint the rows depending on the value of a cell in that row.
    I looked at the tutorials, and it seems to me that it's easy to paint a whole column, but not a whole row.
    How can I do this ?
    Thanks

    This was from a project I did: a renderer for a sortable, drag-and-droppable JTable. It is rather specialized, but it works for me, and you should be able to pull out what you need. It colors rows based on two different column values.
    package dndjtable;

    import java.awt.Color;
    import java.awt.Component;
    import java.util.*;
    import javax.swing.JLabel;
    import javax.swing.JTable;
    import javax.swing.table.TableCellRenderer;

    public class dndTableCellRenderer implements TableCellRenderer {
        private JLabel alabel = new JLabel();
        private final long DAY = 1000 * 60 * 60 * 24;
        private final long FOURDAY = DAY * 4;
        private final long WEEK = DAY * 7;

        /** Creates a new dndTableCellRenderer object. */
        public dndTableCellRenderer() {
            alabel.setHorizontalAlignment(JLabel.LEFT);
            alabel.setOpaque(true);
        }

        public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected,
                                                       boolean hasFocus, int row, int column) {
            DndTableModel dtm = (DndTableModel) table.getModel();
            if (value == null) {
                alabel.setText("");
            } else {
                alabel.setText(String.valueOf(value));
            }
            if (dtm != null) {
                Object[] data = (Object[]) ((ArrayList) dtm.getListData()).get(row);
                if (data != null) {
                    long val = (new Date(String.valueOf(data[7]))).getTime();
                    long now = System.currentTimeMillis();
                    if ((now - val) > this.WEEK) {    // more than one week old
                        if (isSelected) {
                            alabel.setBackground(DndJTable.REDCELL);
                            alabel.setForeground(table.getSelectionForeground());
                        } else {
                            alabel.setBackground(DndJTable.LIGHTREDCELL);
                            alabel.setForeground(table.getForeground());
                        }
                    } else if (((String) data[4]).compareTo("0") != 0) {    // 4 days
                        if (isSelected) {
                            alabel.setBackground(DndJTable.YELLOWCELL);
                            alabel.setForeground(Color.blue);
                        } else {
                            alabel.setBackground(DndJTable.LIGHTYELLOWCELL);
                            alabel.setForeground(table.getForeground());
                        }
                    } else {
                        if (isSelected) {
                            alabel.setBackground(DndJTable.SELECTCOLOR);
                            alabel.setForeground(table.getSelectionForeground());
                        } else {
                            alabel.setBackground(table.getBackground());
                            alabel.setForeground(table.getForeground());
                        }
                    }
                    if (data[9] != null) {
                        if (isSelected) {
                            alabel.setBackground(DndJTable.GREENCELL);
                            alabel.setForeground(table.getSelectionForeground());
                        } else {
                            alabel.setBackground(DndJTable.LIGHTGREENCELL);
                            alabel.setForeground(table.getForeground());
                        }
                    }
                    if (column == 4 && ((String) data[4]).compareTo("0") != 0) {
                        if (isSelected) {
                            alabel.setForeground(Color.lightGray);
                        } else {
                            alabel.setForeground(Color.red);
                        }
                    }
                }
            }
            return alabel;
        }
    }
    The colors are only my own versions of light colors. They can be changed to anything you wish.
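    If you only need the basic pattern from the question — paint an entire row based on the value of one cell in that row — a smaller sketch built on DefaultTableCellRenderer looks like this (the status column index and the "OVERDUE" marker value are just assumptions for illustration):

```java
import java.awt.Color;
import java.awt.Component;
import javax.swing.JTable;
import javax.swing.table.DefaultTableCellRenderer;

// Colors a whole row pink when that row's status cell holds "OVERDUE".
// STATUS_COLUMN is a hypothetical model column; adjust it to your model.
class RowColorRenderer extends DefaultTableCellRenderer {
    private static final int STATUS_COLUMN = 2;

    @Override
    public Component getTableCellRendererComponent(JTable table, Object value,
            boolean isSelected, boolean hasFocus, int row, int column) {
        Component c = super.getTableCellRendererComponent(
                table, value, isSelected, hasFocus, row, column);
        if (!isSelected) { // leave the normal selection colors intact
            // Map the view row back to the model in case the table is sorted.
            int modelRow = table.convertRowIndexToModel(row);
            Object status = table.getModel().getValueAt(modelRow, STATUS_COLUMN);
            if ("OVERDUE".equals(status)) {
                c.setBackground(Color.PINK);
            } else {
                c.setBackground(table.getBackground());
            }
        }
        return c;
    }
}
```

    Install it for every column so the color covers the whole row, e.g. `table.setDefaultRenderer(Object.class, new RowColorRenderer());`.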
    I hope this helps. I will answer any questions.
    I also gave a link on your other post to some good example JTable demos.
    regards,
    jarshe
