Power Query Doesn't Load Complete CSV File After Pivot Transform
Hello. After transforming a CSV file using the following code, I can load the entire file (45,000 rows) into the Excel Data Model.
let
Source = Csv.Document(File.Contents("C:\Users\jd\Desktop\2014 JW Annual Review\ALL STORES 01-07-15 AR OPEN.csv"),null,",",null,1252),
#"First Row as Header" = Table.PromoteHeaders(Source),
#"Changed Type" = Table.TransformColumnTypes(#"First Row as Header",{{"AR-OPEN", type text}, {"SALE CO", type text}, {"NAME", type text}, {"N-CD", Int64.Type}, {"REFER", type text}, {"JRNL CO", type text}, {"SCHED", type text}, {"AGME.CO.ID", Int64.Type}, {"SO", type text}, {"ODATE", type text}, {"JRNL TYPE", type text}, {"CONTROL", type text}, {"DATE", type text}, {"A-ACCT", type text}, {"CONTROL2", type text}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"AR-OPEN"}),
#"Split Column by Delimiter" = Table.SplitColumn(#"Removed Columns","SALE CO",Splitter.SplitTextByDelimiter(" "),{"SALE CO.1", "SALE CO.2", "SALE CO.3", "SALE CO.4", "SALE CO.5", "SALE CO.6", "SALE CO.7", "SALE CO.8", "SALE CO.9", "SALE CO.10", "SALE CO.11", "SALE CO.12", "SALE CO.13", "SALE CO.14", "SALE CO.15", "SALE CO.16", "SALE CO.17", "SALE CO.18", "SALE CO.19", "SALE CO.20", "SALE CO.21", "SALE CO.22", "SALE CO.23", "SALE CO.24", "SALE CO.25"}),
#"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"SALE CO.1", Int64.Type}, {"SALE CO.2", type text}, {"SALE CO.3", type text}, {"SALE CO.4", type text}, {"SALE CO.5", type text}, {"SALE CO.6", type text}, {"SALE CO.7", type text}, {"SALE CO.8", type text}, {"SALE CO.9", type text}, {"SALE CO.10", type text}, {"SALE CO.11", type text}, {"SALE CO.12", type text}, {"SALE CO.13", type text}, {"SALE CO.14", type text}, {"SALE CO.15", type text}, {"SALE CO.16", type text}, {"SALE CO.17", type text}, {"SALE CO.18", type text}, {"SALE CO.19", type text}, {"SALE CO.20", type text}, {"SALE CO.21", type text}, {"SALE CO.22", type text}, {"SALE CO.23", type text}, {"SALE CO.24", type text}, {"SALE CO.25", type text}, {"NAME_1", type text}, {"DATE_2", type text}, {"NAME_3", type text}, {"CONTROL_4", type text}}),
#"Split Column by Delimiter1" = Table.SplitColumn(#"Changed Type1","REFER",Splitter.SplitTextByDelimiter(" "),{"REFER.1", "REFER.2", "REFER.3", "REFER.4", "REFER.5", "REFER.6", "REFER.7", "REFER.8", "REFER.9", "REFER.10", "REFER.11", "REFER.12", "REFER.13", "REFER.14", "REFER.15", "REFER.16", "REFER.17", "REFER.18", "REFER.19", "REFER.20", "REFER.21", "REFER.22", "REFER.23", "REFER.24", "REFER.25"}),
#"Changed Type2" = Table.TransformColumnTypes(#"Split Column by Delimiter1",{{"REFER.1", type text}, {"REFER.2", type text}, {"REFER.3", type text}, {"REFER.4", type text}, {"REFER.5", type text}, {"REFER.6", type text}, {"REFER.7", type text}, {"REFER.8", type text}, {"REFER.9", type text}, {"REFER.10", type text}, {"REFER.11", type text}, {"REFER.12", type text}, {"REFER.13", type text}, {"REFER.14", type text}, {"REFER.15", type text}, {"REFER.16", type text}, {"REFER.17", type text}, {"REFER.18", type text}, {"REFER.19", type text}, {"REFER.20", type text}, {"REFER.21", type text}, {"REFER.22", type text}, {"REFER.23", type text}, {"REFER.24", type text}, {"REFER.25", type text}}),
#"Split Column by Delimiter2" = Table.SplitColumn(#"Changed Type2","JRNL CO",Splitter.SplitTextByDelimiter(" "),{"JRNL CO.1", "JRNL CO.2", "JRNL CO.3", "JRNL CO.4", "JRNL CO.5", "JRNL CO.6", "JRNL CO.7", "JRNL CO.8", "JRNL CO.9", "JRNL CO.10", "JRNL CO.11", "JRNL CO.12", "JRNL CO.13", "JRNL CO.14", "JRNL CO.15", "JRNL CO.16", "JRNL CO.17", "JRNL CO.18", "JRNL CO.19", "JRNL CO.20", "JRNL CO.21", "JRNL CO.22", "JRNL CO.23", "JRNL CO.24", "JRNL CO.25"}),
#"Changed Type3" = Table.TransformColumnTypes(#"Split Column by Delimiter2",{{"JRNL CO.1", Int64.Type}, {"JRNL CO.2", type text}, {"JRNL CO.3", type text}, {"JRNL CO.4", type text}, {"JRNL CO.5", type text}, {"JRNL CO.6", type text}, {"JRNL CO.7", type text}, {"JRNL CO.8", type text}, {"JRNL CO.9", type text}, {"JRNL CO.10", type text}, {"JRNL CO.11", type text}, {"JRNL CO.12", type text}, {"JRNL CO.13", type text}, {"JRNL CO.14", type text}, {"JRNL CO.15", type text}, {"JRNL CO.16", type text}, {"JRNL CO.17", type text}, {"JRNL CO.18", type text}, {"JRNL CO.19", type text}, {"JRNL CO.20", type text}, {"JRNL CO.21", type text}, {"JRNL CO.22", type text}, {"JRNL CO.23", type text}, {"JRNL CO.24", type text}, {"JRNL CO.25", type text}}),
#"Split Column by Delimiter3" = Table.SplitColumn(#"Changed Type3","SO",Splitter.SplitTextByDelimiter(" "),{"SO.1", "SO.2", "SO.3", "SO.4", "SO.5", "SO.6", "SO.7", "SO.8", "SO.9", "SO.10", "SO.11", "SO.12", "SO.13", "SO.14", "SO.15", "SO.16", "SO.17", "SO.18", "SO.19", "SO.20", "SO.21", "SO.22", "SO.23", "SO.24", "SO.25"}),
#"Changed Type4" = Table.TransformColumnTypes(#"Split Column by Delimiter3",{{"SO.1", Int64.Type}, {"SO.2", type text}, {"SO.3", type text}, {"SO.4", type text}, {"SO.5", type text}, {"SO.6", type text}, {"SO.7", type text}, {"SO.8", type text}, {"SO.9", type text}, {"SO.10", type text}, {"SO.11", type text}, {"SO.12", type text}, {"SO.13", type text}, {"SO.14", type text}, {"SO.15", type text}, {"SO.16", type text}, {"SO.17", type text}, {"SO.18", type text}, {"SO.19", type text}, {"SO.20", type text}, {"SO.21", type text}, {"SO.22", type text}, {"SO.23", type text}, {"SO.24", type text}, {"SO.25", type text}}),
#"Split Column by Delimiter4" = Table.SplitColumn(#"Changed Type4","ODATE",Splitter.SplitTextByDelimiter(" "),{"ODATE.1", "ODATE.2", "ODATE.3", "ODATE.4", "ODATE.5", "ODATE.6", "ODATE.7", "ODATE.8", "ODATE.9", "ODATE.10", "ODATE.11", "ODATE.12", "ODATE.13", "ODATE.14", "ODATE.15", "ODATE.16", "ODATE.17", "ODATE.18", "ODATE.19", "ODATE.20", "ODATE.21", "ODATE.22", "ODATE.23", "ODATE.24", "ODATE.25"}),
#"Changed Type5" = Table.TransformColumnTypes(#"Split Column by Delimiter4",{{"ODATE.1", type date}, {"ODATE.2", type text}, {"ODATE.3", type text}, {"ODATE.4", type text}, {"ODATE.5", type text}, {"ODATE.6", type text}, {"ODATE.7", type text}, {"ODATE.8", type text}, {"ODATE.9", type text}, {"ODATE.10", type text}, {"ODATE.11", type text}, {"ODATE.12", type text}, {"ODATE.13", type text}, {"ODATE.14", type text}, {"ODATE.15", type text}, {"ODATE.16", type text}, {"ODATE.17", type text}, {"ODATE.18", type text}, {"ODATE.19", type text}, {"ODATE.20", type text}, {"ODATE.21", type text}, {"ODATE.22", type text}, {"ODATE.23", type text}, {"ODATE.24", type text}, {"ODATE.25", type text}}),
#"Split Column by Delimiter5" = Table.SplitColumn(#"Changed Type5","JRNL TYPE",Splitter.SplitTextByDelimiter(" "),{"JRNL TYPE.1", "JRNL TYPE.2", "JRNL TYPE.3", "JRNL TYPE.4", "JRNL TYPE.5", "JRNL TYPE.6", "JRNL TYPE.7", "JRNL TYPE.8", "JRNL TYPE.9", "JRNL TYPE.10", "JRNL TYPE.11", "JRNL TYPE.12", "JRNL TYPE.13", "JRNL TYPE.14", "JRNL TYPE.15", "JRNL TYPE.16", "JRNL TYPE.17", "JRNL TYPE.18", "JRNL TYPE.19", "JRNL TYPE.20", "JRNL TYPE.21", "JRNL TYPE.22", "JRNL TYPE.23", "JRNL TYPE.24", "JRNL TYPE.25"}),
#"Changed Type6" = Table.TransformColumnTypes(#"Split Column by Delimiter5",{{"JRNL TYPE.1", type text}, {"JRNL TYPE.2", type text}, {"JRNL TYPE.3", type text}, {"JRNL TYPE.4", type text}, {"JRNL TYPE.5", type text}, {"JRNL TYPE.6", type text}, {"JRNL TYPE.7", type text}, {"JRNL TYPE.8", type text}, {"JRNL TYPE.9", type text}, {"JRNL TYPE.10", type text}, {"JRNL TYPE.11", type text}, {"JRNL TYPE.12", type text}, {"JRNL TYPE.13", type text}, {"JRNL TYPE.14", type text}, {"JRNL TYPE.15", type text}, {"JRNL TYPE.16", type text}, {"JRNL TYPE.17", type text}, {"JRNL TYPE.18", type text}, {"JRNL TYPE.19", type text}, {"JRNL TYPE.20", type text}, {"JRNL TYPE.21", type text}, {"JRNL TYPE.22", type text}, {"JRNL TYPE.23", type text}, {"JRNL TYPE.24", type text}, {"JRNL TYPE.25", type text}}),
#"Split Column by Delimiter6" = Table.SplitColumn(#"Changed Type6","DATE",Splitter.SplitTextByDelimiter(" "),{"DATE.1", "DATE.2", "DATE.3", "DATE.4", "DATE.5", "DATE.6", "DATE.7", "DATE.8", "DATE.9", "DATE.10", "DATE.11", "DATE.12", "DATE.13", "DATE.14", "DATE.15", "DATE.16", "DATE.17", "DATE.18", "DATE.19", "DATE.20", "DATE.21", "DATE.22", "DATE.23", "DATE.24", "DATE.25"}),
#"Changed Type7" = Table.TransformColumnTypes(#"Split Column by Delimiter6",{{"DATE.1", type date}, {"DATE.2", type text}, {"DATE.3", type text}, {"DATE.4", type text}, {"DATE.5", type text}, {"DATE.6", type text}, {"DATE.7", type text}, {"DATE.8", type text}, {"DATE.9", type text}, {"DATE.10", type text}, {"DATE.11", type text}, {"DATE.12", type text}, {"DATE.13", type text}, {"DATE.14", type text}, {"DATE.15", type text}, {"DATE.16", type text}, {"DATE.17", type text}, {"DATE.18", type text}, {"DATE.19", type text}, {"DATE.20", type text}, {"DATE.21", type text}, {"DATE.22", type text}, {"DATE.23", type text}, {"DATE.24", type text}, {"DATE.25", type text}}),
#"Reordered Columns" = Table.ReorderColumns(#"Changed Type7",{"AGME.CO.ID", "SCHED", "NAME", "NAME_1", "N-CD", "SALE CO.1", "SALE CO.2", "SALE CO.3", "SALE CO.4", "SALE CO.5", "SALE CO.6", "SALE CO.7", "SALE CO.8", "SALE CO.9", "SALE CO.10", "SALE CO.11", "SALE CO.12", "SALE CO.13", "SALE CO.14", "SALE CO.15", "SALE CO.16", "SALE CO.17", "SALE CO.18", "SALE CO.19", "SALE CO.20", "SALE CO.21", "SALE CO.22", "SALE CO.23", "SALE CO.24", "SALE CO.25", "REFER.1", "REFER.2", "REFER.3", "REFER.4", "REFER.5", "REFER.6", "REFER.7", "REFER.8", "REFER.9", "REFER.10", "REFER.11", "REFER.12", "REFER.13", "REFER.14", "REFER.15", "REFER.16", "REFER.17", "REFER.18", "REFER.19", "REFER.20", "REFER.21", "REFER.22", "REFER.23", "REFER.24", "REFER.25", "JRNL CO.1", "JRNL CO.2", "JRNL CO.3", "JRNL CO.4", "JRNL CO.5", "JRNL CO.6", "JRNL CO.7", "JRNL CO.8", "JRNL CO.9", "JRNL CO.10", "JRNL CO.11", "JRNL CO.12", "JRNL CO.13", "JRNL CO.14", "JRNL CO.15", "JRNL CO.16", "JRNL CO.17", "JRNL CO.18", "JRNL CO.19", "JRNL CO.20", "JRNL CO.21", "JRNL CO.22", "JRNL CO.23", "JRNL CO.24", "JRNL CO.25", "SO.1", "SO.2", "SO.3", "SO.4", "SO.5", "SO.6", "SO.7", "SO.8", "SO.9", "SO.10", "SO.11", "SO.12", "SO.13", "SO.14", "SO.15", "SO.16", "SO.17", "SO.18", "SO.19", "SO.20", "SO.21", "SO.22", "SO.23", "SO.24", "SO.25", "ODATE.1", "ODATE.2", "ODATE.3", "ODATE.4", "ODATE.5", "ODATE.6", "ODATE.7", "ODATE.8", "ODATE.9", "ODATE.10", "ODATE.11", "ODATE.12", "ODATE.13", "ODATE.14", "ODATE.15", "ODATE.16", "ODATE.17", "ODATE.18", "ODATE.19", "ODATE.20", "ODATE.21", "ODATE.22", "ODATE.23", "ODATE.24", "ODATE.25", "JRNL TYPE.1", "JRNL TYPE.2", "JRNL TYPE.3", "JRNL TYPE.4", "JRNL TYPE.5", "JRNL TYPE.6", "JRNL TYPE.7", "JRNL TYPE.8", "JRNL TYPE.9", "JRNL TYPE.10", "JRNL TYPE.11", "JRNL TYPE.12", "JRNL TYPE.13", "JRNL TYPE.14", "JRNL TYPE.15", "JRNL TYPE.16", "JRNL TYPE.17", "JRNL TYPE.18", "JRNL TYPE.19", "JRNL TYPE.20", "JRNL TYPE.21", "JRNL TYPE.22", "JRNL TYPE.23", "JRNL TYPE.24", "JRNL TYPE.25", "CONTROL", 
"DATE.1", "DATE.2", "DATE.3", "DATE.4", "DATE.5", "DATE.6", "DATE.7", "DATE.8", "DATE.9", "DATE.10", "DATE.11", "DATE.12", "DATE.13", "DATE.14", "DATE.15", "DATE.16", "DATE.17", "DATE.18", "DATE.19", "DATE.20", "DATE.21", "DATE.22", "DATE.23", "DATE.24", "DATE.25", "DATE_2", "A-ACCT", "NAME_3", "CONTROL_4", "CONTROL2"}),
#"Removed Columns1" = Table.RemoveColumns(#"Reordered Columns",{"DATE_2", "A-ACCT", "NAME_3", "CONTROL_4", "CONTROL2"}),
#"Reordered Columns1" = Table.ReorderColumns(#"Removed Columns1",{"AGME.CO.ID", "SCHED", "NAME", "NAME_1", "N-CD", "CONTROL", "SALE CO.1", "SALE CO.2", "SALE CO.3", "SALE CO.4", "SALE CO.5", "SALE CO.6", "SALE CO.7", "SALE CO.8", "SALE CO.9", "SALE CO.10", "SALE CO.11", "SALE CO.12", "SALE CO.13", "SALE CO.14", "SALE CO.15", "SALE CO.16", "SALE CO.17", "SALE CO.18", "SALE CO.19", "SALE CO.20", "SALE CO.21", "SALE CO.22", "SALE CO.23", "SALE CO.24", "SALE CO.25", "REFER.1", "REFER.2", "REFER.3", "REFER.4", "REFER.5", "REFER.6", "REFER.7", "REFER.8", "REFER.9", "REFER.10", "REFER.11", "REFER.12", "REFER.13", "REFER.14", "REFER.15", "REFER.16", "REFER.17", "REFER.18", "REFER.19", "REFER.20", "REFER.21", "REFER.22", "REFER.23", "REFER.24", "REFER.25", "JRNL CO.1", "JRNL CO.2", "JRNL CO.3", "JRNL CO.4", "JRNL CO.5", "JRNL CO.6", "JRNL CO.7", "JRNL CO.8", "JRNL CO.9", "JRNL CO.10", "JRNL CO.11", "JRNL CO.12", "JRNL CO.13", "JRNL CO.14", "JRNL CO.15", "JRNL CO.16", "JRNL CO.17", "JRNL CO.18", "JRNL CO.19", "JRNL CO.20", "JRNL CO.21", "JRNL CO.22", "JRNL CO.23", "JRNL CO.24", "JRNL CO.25", "SO.1", "SO.2", "SO.3", "SO.4", "SO.5", "SO.6", "SO.7", "SO.8", "SO.9", "SO.10", "SO.11", "SO.12", "SO.13", "SO.14", "SO.15", "SO.16", "SO.17", "SO.18", "SO.19", "SO.20", "SO.21", "SO.22", "SO.23", "SO.24", "SO.25", "ODATE.1", "ODATE.2", "ODATE.3", "ODATE.4", "ODATE.5", "ODATE.6", "ODATE.7", "ODATE.8", "ODATE.9", "ODATE.10", "ODATE.11", "ODATE.12", "ODATE.13", "ODATE.14", "ODATE.15", "ODATE.16", "ODATE.17", "ODATE.18", "ODATE.19", "ODATE.20", "ODATE.21", "ODATE.22", "ODATE.23", "ODATE.24", "ODATE.25", "JRNL TYPE.1", "JRNL TYPE.2", "JRNL TYPE.3", "JRNL TYPE.4", "JRNL TYPE.5", "JRNL TYPE.6", "JRNL TYPE.7", "JRNL TYPE.8", "JRNL TYPE.9", "JRNL TYPE.10", "JRNL TYPE.11", "JRNL TYPE.12", "JRNL TYPE.13", "JRNL TYPE.14", "JRNL TYPE.15", "JRNL TYPE.16", "JRNL TYPE.17", "JRNL TYPE.18", "JRNL TYPE.19", "JRNL TYPE.20", "JRNL TYPE.21", "JRNL TYPE.22", "JRNL TYPE.23", "JRNL TYPE.24", "JRNL TYPE.25", 
"DATE.1", "DATE.2", "DATE.3", "DATE.4", "DATE.5", "DATE.6", "DATE.7", "DATE.8", "DATE.9", "DATE.10", "DATE.11", "DATE.12", "DATE.13", "DATE.14", "DATE.15", "DATE.16", "DATE.17", "DATE.18", "DATE.19", "DATE.20", "DATE.21", "DATE.22", "DATE.23", "DATE.24", "DATE.25"}),
#"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Reordered Columns1", {"AGME.CO.ID", "SCHED", "NAME", "NAME_1", "N-CD", "CONTROL"}, "Attribute", "Value"),
#"Replaced Value" = Table.ReplaceValue(#"Unpivoted Columns",".*","",Replacer.ReplaceText,{"Attribute"}),
#"Split Column by Delimiter7" = Table.SplitColumn(#"Replaced Value","Attribute",Splitter.SplitTextByDelimiter("."),{"Attribute.1", "Attribute.2"}),
#"Changed Type8" = Table.TransformColumnTypes(#"Split Column by Delimiter7",{{"Attribute.1", type text}, {"Attribute.2", Int64.Type}}),
#"Removed Columns2" = Table.RemoveColumns(#"Changed Type8",{"Attribute.2"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns2",{{"Attribute.1", "Type"}}),
#"Removed Columns3" = Table.RemoveColumns(#"Renamed Columns",{"SCHED", "NAME", "NAME_1", "N-CD"}),
#"Renamed Columns1" = Table.RenameColumns(#"Removed Columns3",{{"AGME.CO.ID", "companyid"}, {"CONTROL", "control"}}),
#"Reordered Columns2" = Table.ReorderColumns(#"Renamed Columns1",{"companyid", "control", "Type", "Value"})
in
#"Reordered Columns2"
I need to do one more transform which is a Pivot like so:
#"Pivoted Column" = Table.Pivot(#"Reordered Columns2", List.Distinct(#"Reordered Columns2"[Type]), "Type", "Value")
in
#"Pivoted Column"
and I only get 326 rows. Because it is a pivot without aggregation I get errors, but even after removing the error rows I still don't get the complete result set.
Any ideas?
Hello again, and thanks for your help. I used List.First because the Value column is text; however, I only got one row returned, so I thought I would give you all my code (I changed some things to make it simpler) and some pictures. Here's my code:
let
Source = Csv.Document(File.Contents("C:\Users\jdonnelly\Desktop\2014 JW Annual Review\ALL STORES 01-07-15 AR OPEN.csv"),null,",",null,1252),
#"First Row as Header" = Table.PromoteHeaders(Source),
#"Changed Type" = Table.TransformColumnTypes(#"First Row as Header",{{"AR-OPEN", type text}, {"SALE CO", type text}, {"NAME", type text}, {"N-CD", Int64.Type}, {"REFER", type text}, {"JRNL CO", type text}, {"SCHED", type text}, {"AGME.CO.ID", Int64.Type}, {"SO", type text}, {"ODATE", type text}, {"JRNL TYPE", type text}, {"CONTROL", type text}, {"DATE", type text}, {"A-ACCT", type text}, {"CONTROL2", type text}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"AR-OPEN", "NAME", "NAME_1", "N-CD", "JRNL CO", "SCHED", "AGME.CO.ID", "JRNL TYPE", "DATE", "DATE_2", "A-ACCT", "NAME_3", "CONTROL_4", "CONTROL2"}),
#"Reordered Columns" = Table.ReorderColumns(#"Removed Columns",{"CONTROL", "SALE CO", "REFER", "SO", "ODATE"}),
#"Split Column by Delimiter" = Table.SplitColumn(#"Reordered Columns","ODATE",Splitter.SplitTextByDelimiter(" "),{"ODATE.1", "ODATE.2", "ODATE.3", "ODATE.4", "ODATE.5", "ODATE.6", "ODATE.7", "ODATE.8", "ODATE.9", "ODATE.10", "ODATE.11", "ODATE.12", "ODATE.13", "ODATE.14", "ODATE.15", "ODATE.16", "ODATE.17", "ODATE.18", "ODATE.19"}),
#"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"ODATE.1", type date}, {"ODATE.2", type text}, {"ODATE.3", type text}, {"ODATE.4", type text}, {"ODATE.5", type text}, {"ODATE.6", type text}, {"ODATE.7", type text}, {"ODATE.8", type text}, {"ODATE.9", type text}, {"ODATE.10", type text}, {"ODATE.11", type text}, {"ODATE.12", type text}, {"ODATE.13", type text}, {"ODATE.14", type text}, {"ODATE.15", type text}, {"ODATE.16", type text}, {"ODATE.17", type text}, {"ODATE.18", type text}, {"ODATE.19", type text}}),
#"Removed Columns1" = Table.RemoveColumns(#"Changed Type1",{"ODATE.2", "ODATE.3", "ODATE.5", "ODATE.6", "ODATE.8", "ODATE.9", "ODATE.11", "ODATE.12", "ODATE.14", "ODATE.15", "ODATE.17", "ODATE.18"}),
#"Changed Type2" = Table.TransformColumnTypes(#"Removed Columns1",{{"ODATE.1", type date}, {"ODATE.4", type date}, {"ODATE.7", type date}, {"ODATE.10", type date}, {"ODATE.13", type date}, {"ODATE.16", type date}, {"ODATE.19", type date}}),
#"Split Column by Delimiter1" = Table.SplitColumn(#"Changed Type2","SO",Splitter.SplitTextByDelimiter(" "),{"SO.1", "SO.2", "SO.3", "SO.4", "SO.5", "SO.6", "SO.7", "SO.8", "SO.9", "SO.10", "SO.11", "SO.12", "SO.13", "SO.14", "SO.15", "SO.16", "SO.17", "SO.18", "SO.19"}),
#"Changed Type3" = Table.TransformColumnTypes(#"Split Column by Delimiter1",{{"SO.1", Int64.Type}, {"SO.2", type text}, {"SO.3", type text}, {"SO.4", type text}, {"SO.5", type text}, {"SO.6", type text}, {"SO.7", type text}, {"SO.8", type text}, {"SO.9", type text}, {"SO.10", type text}, {"SO.11", type text}, {"SO.12", type text}, {"SO.13", type text}, {"SO.14", type text}, {"SO.15", type text}, {"SO.16", type text}, {"SO.17", type text}, {"SO.18", type text}, {"SO.19", type text}}),
#"Removed Columns2" = Table.RemoveColumns(#"Changed Type3",{"SO.2", "SO.3", "SO.5", "SO.6", "SO.8", "SO.9", "SO.11", "SO.12", "SO.14", "SO.15", "SO.17", "SO.18"}),
#"Split Column by Delimiter2" = Table.SplitColumn(#"Removed Columns2","REFER",Splitter.SplitTextByDelimiter(" "),{"REFER.1", "REFER.2", "REFER.3", "REFER.4", "REFER.5", "REFER.6", "REFER.7", "REFER.8", "REFER.9", "REFER.10", "REFER.11", "REFER.12", "REFER.13", "REFER.14", "REFER.15", "REFER.16", "REFER.17", "REFER.18", "REFER.19"}),
#"Changed Type4" = Table.TransformColumnTypes(#"Split Column by Delimiter2",{{"REFER.1", type text}, {"REFER.2", type text}, {"REFER.3", type text}, {"REFER.4", type text}, {"REFER.5", type text}, {"REFER.6", type text}, {"REFER.7", type text}, {"REFER.8", type text}, {"REFER.9", type text}, {"REFER.10", type text}, {"REFER.11", type text}, {"REFER.12", type text}, {"REFER.13", type text}, {"REFER.14", type text}, {"REFER.15", type text}, {"REFER.16", type text}, {"REFER.17", type text}, {"REFER.18", type text}, {"REFER.19", type text}}),
#"Removed Columns3" = Table.RemoveColumns(#"Changed Type4",{"REFER.2", "REFER.3", "REFER.5", "REFER.6", "REFER.8", "REFER.9", "REFER.11", "REFER.12", "REFER.14", "REFER.15", "REFER.17", "REFER.18"}),
#"Split Column by Delimiter3" = Table.SplitColumn(#"Removed Columns3","SALE CO",Splitter.SplitTextByDelimiter(" "),{"SALE CO.1", "SALE CO.2", "SALE CO.3", "SALE CO.4", "SALE CO.5", "SALE CO.6", "SALE CO.7", "SALE CO.8", "SALE CO.9", "SALE CO.10", "SALE CO.11", "SALE CO.12", "SALE CO.13", "SALE CO.14", "SALE CO.15", "SALE CO.16", "SALE CO.17", "SALE CO.18", "SALE CO.19"}),
#"Changed Type5" = Table.TransformColumnTypes(#"Split Column by Delimiter3",{{"SALE CO.1", Int64.Type}, {"SALE CO.2", type text}, {"SALE CO.3", type text}, {"SALE CO.4", type text}, {"SALE CO.5", type text}, {"SALE CO.6", type text}, {"SALE CO.7", type text}, {"SALE CO.8", type text}, {"SALE CO.9", type text}, {"SALE CO.10", type text}, {"SALE CO.11", type text}, {"SALE CO.12", type text}, {"SALE CO.13", type text}, {"SALE CO.14", type text}, {"SALE CO.15", type text}, {"SALE CO.16", type text}, {"SALE CO.17", type text}, {"SALE CO.18", type text}, {"SALE CO.19", type text}}),
#"Removed Columns4" = Table.RemoveColumns(#"Changed Type5",{"SALE CO.2", "SALE CO.3", "SALE CO.5", "SALE CO.6", "SALE CO.8", "SALE CO.9", "SALE CO.11", "SALE CO.12", "SALE CO.14", "SALE CO.15", "SALE CO.17", "SALE CO.18"}),
#"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Removed Columns4", {}, "Attribute", "Value"),
#"Changed Type6" = Table.TransformColumnTypes(#"Unpivoted Columns",{{"Value", type text}}),
#"Split Column by Delimiter4" = Table.SplitColumn(#"Changed Type6","Attribute",Splitter.SplitTextByDelimiter("."),{"Attribute.1", "Attribute.2"}),
#"Changed Type7" = Table.TransformColumnTypes(#"Split Column by Delimiter4",{{"Attribute.1", type text}, {"Attribute.2", type text}}),
#"Removed Columns5" = Table.RemoveColumns(#"Changed Type7",{"Attribute.2"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns5",{{"Attribute.1", "Attribute"}}),
#"Pivoted Column" = Table.Pivot(#"Renamed Columns", List.Distinct(#"Renamed Columns"[Attribute]), "Attribute", "Value", List.First)
in
#"Pivoted Column"
Before the Table.Pivot my view is as follows:
[screenshot not included]
It runs down to row 18,200 or so. I chose a smaller file to make things simpler.
After the Table.Pivot as written above, this is what my view shows:
[screenshot not included]
So I received only one row. Is there a way to get all rows?
(For background: the CSV file is from a PICK database that has multi-variate fields where all but one of the fields has multiple values correlated with multiple values in other fields.)
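The one-row result can be reproduced outside Power Query: with no per-row key column, a pivot has nothing to group rows by. Below is a minimal pandas sketch (Python used purely as an illustration of the concept; the sample values are made up), showing that pivoting long-format data without a row key keeps only one value per attribute, while carrying a row index through preserves every row:

```python
import pandas as pd

# Hypothetical long-format rows, standing in for the unpivoted Attribute/Value table.
long_df = pd.DataFrame({
    "Attribute": ["SO", "REFER", "SO", "REFER"],
    "Value": ["100", "A", "200", "B"],
})

# With no per-row key, a pivot can only keep one value per attribute:
collapsed = long_df.groupby("Attribute")["Value"].first()
print(len(collapsed))     # 2 attributes; the two original rows are merged

# Carrying a row index through the unpivot gives the pivot something to group by:
long_df["Index"] = [0, 0, 1, 1]
restored = long_df.pivot(index="Index", columns="Attribute", values="Value")
print(restored.shape)     # (2, 2): one output row per original row
```

In M, the analogous approach would be a Table.AddIndexColumn step before the unpivot, with the index included in the list of columns Table.UnpivotOtherColumns keeps, then removed after the pivot.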
Thanks again
John Donnelly
Similar Messages
-
Loading a CSV file and accessing the variables
Hi guys,
I'm new to AS3 and dealt with AS2 before (just getting the grasp when the change it).
Is it possible in AS3 to load an Excel .csv file into Flash using the URLLoader (or something else?) and use the data as variables?
I can get the .csv to load and trace the values (cell1,cell2,cell3....) but I'm not sure how to collect the data and place it into variables.
Can I just create an array and access it like so.... myArray[0], myArray[1]? If so, I'm not sure why it's not working.
I must be on the completely wrong path. Here's what I have so far....
var loader:URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.VARIABLES;
loader.addEventListener(Event.COMPLETE, dataLoaded);
var request:URLRequest = new URLRequest("population.csv");
loader.load(request);
function dataLoaded(evt:Event):void {
var myData:Array = new Array(loader.data);
trace(myData[i]);
}
Thanks for any help,
Sky
Just load your CSV file and use the Flash string methods to allocate those values to an array:
var myData:Array = loader.data.split(",");
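A single split gives a flat array of cells; for multi-row CSV data you would split twice, first on line breaks and then on commas. A small Python sketch of the same idea (language incidental; the sample data is hypothetical):

```python
# Hypothetical CSV text, as it might arrive from the loader.
csv_text = "cell1,cell2,cell3\n10,20,30"

# Split into rows on line breaks, then each row into cells on commas.
rows = [line.split(",") for line in csv_text.splitlines()]

print(rows[0][0])   # cell1
print(rows[1][2])   # 30
```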
Error while loading a CSV file
Haii,
While loading a CSV file in OpenScript I'm facing the following error:
Error loading the CSV file. Caused by: java.io.SocketException.
Can anyone please help me sort it out.
Thank You
Hi,
Are you creating a table and loading it in OpenScript?
If so, can you show the screen shot.
One more thing: you can also convert the Excel sheet into a CSV file and add it as a databank in the Tree view of OpenScript.
Thanks and Regards,
Nishanth Soundararajan.
Example for loading a csv file into diadem from a labview application
Hi everyone, i'm using labview 8.2 and DIAdem 10.1.
I've been searching in NI example finder but I had no luck so far.
I have already downloaded the labview connectivity VIs.
Can anyone provide a example that can help me loading a csv file into diadem from a labview application?
Thanks
Hi Alexandre,
I've attached an example for you.
Best Regards.
Message edited by R_Duval on 01-15-2008 02:44 PM
Romain D.
National Instruments France
Attachments:
Classeur1.csv 1 KB
Load CSV to Diadem.vi 15 KB -
Loading a CSV file into a table
HTML DB Data Workshop has a feature to load a CSV file and create a table based on it.
I use a simplified version of this feature in my applications.
See a quick demo at
http://htmldb.oracle.com/pls/otn/f?p=38131:1
Create a small csv file like
col1,col2,col3
varchar2(10),number,"number(10,2)"
cat,2,3.2
dog,99,10.4
The first line in the file contains the column names.
The second line contains their datatypes.
Save the file as c:\test.csv
Click the Browse button, search for this file.
Click the Submit button to upload and parse the file.
You can browse the file contents.
If you like what you see, type in a table name and click the Create Table button to create a table.
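The parsing behind this is straightforward: read the first record as column names and the second as datatypes, then assemble the DDL. A rough Python sketch of the idea (the table name t and the in-memory string are stand-ins; the actual feature is implemented in PL/SQL):

```python
import csv
import io

# Hypothetical file contents: first record is column names, second is datatypes.
text = 'col1,col2,col3\nvarchar2(10),number,"number(10,2)"\ncat,2,3.2\ndog,99,10.4\n'

reader = csv.reader(io.StringIO(text))
names = next(reader)   # ['col1', 'col2', 'col3']
types = next(reader)   # ['varchar2(10)', 'number', 'number(10,2)']

# Assemble the CREATE TABLE statement from the two header records.
ddl = "create table t (" + ", ".join(
    f"{name} {dtype}" for name, dtype in zip(names, types)) + ")"
print(ddl)   # create table t (col1 varchar2(10), col2 number, col3 number(10,2))
```

Note the quoted "number(10,2)" in the second record: a CSV-aware reader is needed so the embedded comma doesn't split the datatype.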
Let me know if anyone is interested and I can post the code.
Hope this helps.
Message was edited by:
Vikas
Hi Vikas,
I tried to import your COOL application into html db at http://htmldb.oracle.com and got the error:
"Expecting p_company or wwv_flow_company cookie to contain security group id of application owner.
Error ERR-7621 Could not determine workspace for application (:) on application accept.
OK "
The SAME error I get when trying to import Scott's Photo Catalog at http://htmldb.oracle.com/pls/otn/f?p=18326:7:6703988766288646534::::P7_ID:2242.
What should I change in the exported script, in addition to changing your workspace schema VIKASA or Scott's STUDIO_SHOWCASE to my NICED68?
Thanks!
DC -
Loading a CSV file into a table as in dataworkshop
Data Workshop has a feature to load a CSV file and create a table based on it; I want to create the same thing in my application.
i have gone through the forum http://forums.oracle.com/forums/thread.jspa?threadID=334988&start=60&tstart=0
but I could not download all the files (application, HTMLDB_TOOLS package, and the PAGE_SENTRY function); I couldn't find the PAGE_SENTRY function.
AND when i open this link http://apex.oracle.com/pls/apex/f?p=27746
I couldn't run the application. I provided a CSV file, and when I clicked SUBMIT I got the error:
ORA-06550: line 1, column 7: PLS-00201: identifier 'HTMLDB_TOOLS.PARSE_FILE' must be declared
I tried it on the apex.oracle.com host as given in the previous post.
Any help please? Is there any other method to load data into tables (similar to Data Workshop)?
Hi Jari,
I'm using HTMLDB_TOOLS to parse my CSV files. It works well for creating a report, but when I want to create a new table and upload the data, it gives the error "missing right parenthesis".
I've been looking through the package body and I think there is some problem in the code:
IF (p_table_name is not null)
THEN
BEGIN
execute immediate 'drop table '||p_table_name;
EXCEPTION
WHEN OTHERS THEN NULL;
END;
l_ddl := 'create table '||p_table_name||' '||v(p_ddl_item);
htmldb_util.set_session_state('P149_DEBUG',l_ddl);
execute immediate l_ddl;
l_ddl := 'insert into '||p_table_name||' '||
'select '||v(p_columns_item)||' '||
'from htmldb_collections '||
'where seq_id > 1 and collection_name='''||p_collection_name||'''';
htmldb_util.set_session_state('P149_DEBUG',v('P149_DEBUG')||'/'||l_ddl);
execute immediate l_ddl;
RETURN;
END IF;
It drops the table but does not create the new table.
P149_DEBUG contains: create table EMP_D (Emp ID number(10),Name varchar2(20),Type varchar2(20),Join Date varchar2(20),Location varchar2(20))
If I comment out the
BEGIN
execute immediate 'drop table '||p_table_name;
EXCEPTION
WHEN OTHERS THEN NULL;
END;
l_ddl := 'create table '||p_table_name||' '||v(p_ddl_item);
htmldb_util.set_session_state('P149_DEBUG',l_ddl);
execute immediate l_ddl;
then it works well; i.e., if I enter an existing table name it works fine, inserting the rows,
but I am unable to create a new table.
can you pls help to fix it.
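One possible culprit, judging from the P149_DEBUG output quoted above (this is an assumption, not a confirmed diagnosis): the generated DDL contains column names with embedded spaces, such as Emp ID and Join Date, which are not valid unquoted Oracle identifiers and can make the CREATE TABLE fail while an INSERT into an existing, correctly named table succeeds. A sketch of sanitizing such names before building the DDL (Python used only to illustrate; the helper name is made up):

```python
def sanitize(column_def: str) -> str:
    """Replace spaces in the column name with underscores; the datatype is
    assumed to be the final space-separated token (e.g. 'number(10)')."""
    name, _, dtype = column_def.rpartition(" ")
    return name.replace(" ", "_") + " " + dtype

print(sanitize("Emp ID number(10)"))       # Emp_ID number(10)
print(sanitize("Join Date varchar2(20)"))  # Join_Date varchar2(20)
```

The same substitution could be done in the PL/SQL that assembles l_ddl, or by quoting the identifiers instead.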
Regards,
Little Foot -
Issue while loading a csv file using sql*loader...
Hi,
I am loading a csv file using sql*loader.
On the number columns where data is populated (decimal numbers/integers), the row errors out with the error:
ORA-01722: invalid number
I tried checking the value picked from the Excel file,
and found chr(13), chr(32), and chr(10) characters in the value.
ex: select length('0.21') from dual is giving a value of 7.
When I checked each character, as in
select ascii(substr('0.21',5,1)) from dual, it is returning a value of 9, etc.
I tried the following command....
"to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
to remove all the non-number special characters. But still facing the error.
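The cleanup can be sanity-checked outside the database. A small Python sketch (the sample string is hypothetical) showing why a value that prints as 0.21 can report a length of 7, and that stripping the tab, space, CR, and LF characters makes it parse as a number:

```python
# Hypothetical raw field value: looks like '0.21' but carries invisible characters.
raw = "0.21\t\r\n"            # chr(9), chr(13), chr(10) appended
print(len(raw))               # 7, matching the length() surprise above

# Strip tab, space, CR, and LF — the same characters the REPLACE chain targets.
cleaned = raw
for junk in ("\t", " ", "\r", "\n"):
    cleaned = cleaned.replace(junk, "")

value = float(cleaned)
print(value)                  # 0.21
```

If the error persists after this, the offending character may be something other than these four (e.g. a non-breaking space), which the ascii() probe above would reveal.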
Please let me know, any solution for this error.
Thanks in advance.
Kiran
Control file:
OPTIONS (ROWS=1, ERRORS=10000)
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE '$Xx_TOP/bin/ITEMS.csv'
APPEND INTO TABLE XXINF.ITEMS_STAGE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
ItemNum "trim(replace(replace(:ItemNum,chr(9),''),chr(13),''))",
cross_ref_old_item_num "trim(replace(replace(:cross_ref_old_item_num,chr(9),''),chr(13),''))",
Mas_description "trim(replace(replace(:Mas_description,chr(9),''),chr(13),''))",
Mas_long_description "trim(replace(replace(:Mas_long_description,chr(9),''),chr(13),''))",
Org_description "trim(replace(replace(:Org_description,chr(9),''),chr(13),''))",
Org_long_description "trim(replace(replace(:Org_long_description,chr(9),''),chr(13),''))",
user_item_type "trim(replace(replace(:user_item_type,chr(9),''),chr(13),''))",
organization_code "trim(replace(replace(:organization_code,chr(9),''),chr(13),''))",
primary_uom_code "trim(replace(replace(:primary_uom_code,chr(9),''),chr(13),''))",
inv_default_item_status "trim(replace(replace(:inv_default_item_status,chr(9),''),chr(13),''))",
inventory_item_flag "trim(replace(replace(:inventory_item_flag,chr(9),''),chr(13),''))",
stock_enabled_flag "trim(replace(replace(:stock_enabled_flag,chr(9),''),chr(13),''))",
mtl_transactions_enabled_flag "trim(replace(replace(:mtl_transactions_enabled_flag,chr(9),''),chr(13),''))",
revision_qty_control_code "trim(replace(replace(:revision_qty_control_code,chr(9),''),chr(13),''))",
reservable_type "trim(replace(replace(:reservable_type,chr(9),''),chr(13),''))",
check_shortages_flag "trim(replace(replace(:check_shortages_flag,chr(9),''),chr(13),''))",
shelf_life_code "trim(replace(replace(replace(replace(:shelf_life_code,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
shelf_life_days "trim(replace(replace(replace(replace(:shelf_life_days,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
lot_control_code "trim(replace(replace(:lot_control_code,chr(9),''),chr(13),''))",
auto_lot_alpha_prefix "trim(replace(replace(:auto_lot_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_lot_number "trim(replace(replace(:start_auto_lot_number,chr(9),''),chr(13),''))",
negative_measurement_error "trim(replace(replace(replace(replace(:negative_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
positive_measurement_error "trim(replace(replace(replace(replace(:positive_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
serial_number_control_code "trim(replace(replace(:serial_number_control_code,chr(9),''),chr(13),''))",
auto_serial_alpha_prefix "trim(replace(replace(:auto_serial_alpha_prefix,chr(9),''),chr(13),''))",
start_auto_serial_number "trim(replace(replace(:start_auto_serial_number,chr(9),''),chr(13),''))",
location_control_code "trim(replace(replace(:location_control_code,chr(9),''),chr(13),''))",
restrict_subinventories_code "trim(replace(replace(:restrict_subinventories_code,chr(9),''),chr(13),''))",
restrict_locators_code "trim(replace(replace(:restrict_locators_code,chr(9),''),chr(13),''))",
bom_enabled_flag "trim(replace(replace(:bom_enabled_flag,chr(9),''),chr(13),''))",
costing_enabled_flag "trim(replace(replace(:costing_enabled_flag,chr(9),''),chr(13),''))",
inventory_asset_flag "trim(replace(replace(:inventory_asset_flag,chr(9),''),chr(13),''))",
default_include_in_rollup_flag "trim(replace(replace(:default_include_in_rollup_flag,chr(9),''),chr(13),''))",
cost_of_goods_sold_account "trim(replace(replace(:cost_of_goods_sold_account,chr(9),''),chr(13),''))",
std_lot_size "trim(replace(replace(replace(replace(:std_lot_size,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
sales_account "trim(replace(replace(:sales_account,chr(9),''),chr(13),''))",
purchasing_item_flag "trim(replace(replace(:purchasing_item_flag,chr(9),''),chr(13),''))",
purchasing_enabled_flag "trim(replace(replace(:purchasing_enabled_flag,chr(9),''),chr(13),''))",
must_use_approved_vendor_flag "trim(replace(replace(:must_use_approved_vendor_flag,chr(9),''),chr(13),''))",
allow_item_desc_update_flag "trim(replace(replace(:allow_item_desc_update_flag,chr(9),''),chr(13),''))",
rfq_required_flag "trim(replace(replace(:rfq_required_flag,chr(9),''),chr(13),''))",
buyer_name "trim(replace(replace(:buyer_name,chr(9),''),chr(13),''))",
list_price_per_unit "trim(replace(replace(replace(replace(:list_price_per_unit,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
taxable_flag "trim(replace(replace(:taxable_flag,chr(9),''),chr(13),''))",
purchasing_tax_code "trim(replace(replace(:purchasing_tax_code,chr(9),''),chr(13),''))",
receipt_required_flag "trim(replace(replace(:receipt_required_flag,chr(9),''),chr(13),''))",
inspection_required_flag "trim(replace(replace(:inspection_required_flag,chr(9),''),chr(13),''))",
price_tolerance_percent "trim(replace(replace(replace(replace(:price_tolerance_percent,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
expense_account "trim(replace(replace(:expense_account,chr(9),''),chr(13),''))",
allow_substitute_receipts_flag "trim(replace(replace(:allow_substitute_receipts_flag,chr(9),''),chr(13),''))",
allow_unordered_receipts_flag "trim(replace(replace(:allow_unordered_receipts_flag,chr(9),''),chr(13),''))",
receiving_routing_code "trim(replace(replace(:receiving_routing_code,chr(9),''),chr(13),''))",
inventory_planning_code "trim(replace(replace(:inventory_planning_code,chr(9),''),chr(13),''))",
min_minmax_quantity "trim(replace(replace(replace(replace(:min_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
max_minmax_quantity "trim(replace(replace(replace(replace(:max_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
planning_make_buy_code "trim(replace(replace(:planning_make_buy_code,chr(9),''),chr(13),''))",
source_type "trim(replace(replace(:source_type,chr(9),''),chr(13),''))",
mrp_safety_stock_code "trim(replace(replace(:mrp_safety_stock_code,chr(9),''),chr(13),''))",
material_cost "trim(replace(replace(:material_cost,chr(9),''),chr(13),''))",
mrp_planning_code "trim(replace(replace(:mrp_planning_code,chr(9),''),chr(13),''))",
customer_order_enabled_flag "trim(replace(replace(:customer_order_enabled_flag,chr(9),''),chr(13),''))",
customer_order_flag "trim(replace(replace(:customer_order_flag,chr(9),''),chr(13),''))",
shippable_item_flag "trim(replace(replace(:shippable_item_flag,chr(9),''),chr(13),''))",
internal_order_flag "trim(replace(replace(:internal_order_flag,chr(9),''),chr(13),''))",
internal_order_enabled_flag "trim(replace(replace(:internal_order_enabled_flag,chr(9),''),chr(13),''))",
invoice_enabled_flag "trim(replace(replace(:invoice_enabled_flag,chr(9),''),chr(13),''))",
invoiceable_item_flag "trim(replace(replace(:invoiceable_item_flag,chr(9),''),chr(13),''))",
cross_ref_ean_code "trim(replace(replace(:cross_ref_ean_code,chr(9),''),chr(13),''))",
category_set_intrastat "trim(replace(replace(:category_set_intrastat,chr(9),''),chr(13),''))",
CustomCode "trim(replace(replace(:CustomCode,chr(9),''),chr(13),''))",
net_weight "trim(replace(replace(replace(replace(:net_weight,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
production_speed "trim(replace(replace(:production_speed,chr(9),''),chr(13),''))",
LABEL "trim(replace(replace(:LABEL,chr(9),''),chr(13),''))",
comment1_org_level "trim(replace(replace(:comment1_org_level,chr(9),''),chr(13),''))",
comment2_org_level "trim(replace(replace(:comment2_org_level,chr(9),''),chr(13),''))",
std_cost_price_scala "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
supply_type "trim(replace(replace(:supply_type,chr(9),''),chr(13),''))",
subinventory_code "trim(replace(replace(:subinventory_code,chr(9),''),chr(13),''))",
preprocessing_lead_time "trim(replace(replace(replace(replace(:preprocessing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
processing_lead_time "trim(replace(replace(replace(replace(:processing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
wip_supply_locator "trim(replace(replace(:wip_supply_locator,chr(9),''),chr(13),''))"
Sample data from the CSV file:
"9901-0001-35","390000","JMKL16 Pipe bend 16 mm","","JMKL16 Putkikaari 16 mm","","AI","FJE","Ea","","","","","","","","","","","","","","","","","","","","","","","","","21-21100-22200-00000-00000-00-00000-00000","0","21-11100-22110-00000-00000-00-00000-00000","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.1","Pull","AFTER PROD","","","Locator for Production"
The load errors out especially on two columns:
1) std_cost_price_scala
2) list_price_per_unit
Both are number columns. The load errors out whenever they contain data; if they hold null values, the records go through fine.
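For illustration (outside SQL*Loader), the whitespace scrubbing the control file applies to these columns can be sketched in Python; `clean_number` and the sample values are hypothetical. Note that if the source data carries a decimal comma or a thousands separator, `to_number` will still raise ORA-01722 even after the scrub, so checking the raw bytes of a failing row is worthwhile.

```python
def clean_number(raw):
    """Remove tab (chr 9), CR (chr 13), space (chr 32) and LF (chr 10)
    before converting, mirroring the nested replace/trim calls used for
    std_cost_price_scala and list_price_per_unit."""
    for ch in ("\t", "\r", " ", "\n"):
        raw = raw.replace(ch, "")
    return float(raw) if raw else None

print(clean_number(" 12.5\t\r\n"))  # 12.5
print(clean_number("\t\r"))         # None (null-like input, no error)
```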
Message was edited by:
KK28 -
Loading a CSV file to SQL Server
Hi,
How can I load a CSV file, which is in a shared location, into SQL Server using BODS?
Where can I create the data source pointing to the CSV file?
Hi,
Use a file format to load the CSV file into SQL Server; details below.
A file format defines a connection to a file. Therefore, you use a file format to connect to source or target data when the data is stored in a file rather than a database table. The object library stores file format templates that you use to define specific file formats as sources and targets in data flows.
To work with file formats, perform the following tasks:
• Create a file format template that defines the structure for a file.
• Create a specific source or target file format in a data flow. The source or target file format is based on a template and specifies connection information such as the file name.
File format objects can describe files of the following types:
• Delimited: Characters such as commas or tabs separate each field.
• Fixed width: You specify the column width.
• SAP transport: Use to define data transport objects in SAP application data flows.
• Unstructured text: Use to read one or more files of unstructured text from a directory.
• Unstructured binary: Use to read one or more binary documents from a directory.
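To make the delimited vs. fixed-width distinction concrete, here is a minimal Python sketch (the widths and sample record are made up); a BODS fixed-width file format performs the same slicing declaratively.

```python
def parse_fixed_width(line, widths):
    """Slice one fixed-width record into fields: each column is taken
    by its declared width rather than by searching for a delimiter."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].rstrip())
        pos += w
    return fields

# A 5-char ID column followed by a 10-char name column:
print(parse_fixed_width("11   production", [5, 10]))  # ['11', 'production']
```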
For more details, refer to the Designer Guide:
http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_designer_en.pdf -
Using SQL*Loader to load a .csv file having multiple CLOBs
Oracle 8.1.5 on Solaris 2.6
I want to use SQL*Loader to load a .CSV file that has 4 inline CLOB columns. I shall attempt to give some information about the problem:
1. The CLOBs are not delimited at the field level and could themselves contain commas.
2. I cannot get the data file in any other format.
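Point 1 is the crux: with no quoting, a comma inside a CLOB is indistinguishable from a field separator. A small Python illustration (the sample strings are hypothetical, not from the actual file):

```python
import csv, io

# Quoted fields keep embedded commas intact:
quoted = '1,"text, with commas",2\n'
print(next(csv.reader(io.StringIO(quoted))))  # ['1', 'text, with commas', '2']

# Unquoted fields do not -- the parser simply sees extra columns,
# which is why the file as described cannot be split reliably:
unquoted = '1,text, with commas,2\n'
print(len(next(csv.reader(io.StringIO(unquoted)))))  # 4
```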
Can anybody help me out with this? While loading LOBs into predetermined-size fields, is there a limit on the size?
TIA.
-Murali
Thanks for the article link. The article states "...the loader can load only XMLType tables, not columns." Is this still the case with 10g R2? If so, what is the best way to work around this problem? I am migrating data from a Sybase table that contains a TEXT column (among others) to an Oracle table that contains an XMLType column. How do you recommend I accomplish this task?
- Ron -
Error loading local CSV file into external table
Hello,
I am trying to load a CSV file located on the C:\ drive of a Windows XP system into an external table. Everything worked correctly when using Oracle XE (also installed locally on my Windows system).
However, when I try to load the same file into an Oracle 11g R2 database on UNIX, I get the following error:
ORA-29913: Error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file ...\...\...nnnnn.log
Please let me know if I can achieve the same functionality I had with Oracle XE.
(Note: I cannot use SQL*Loader approach, I am invoking Oracle stored procedures from VB.Net that are attempting to load data into external tables).
Regards,
M.R.
So your database is on a unix box, but the file is on your Windows desktop machine? So how is it you are making your file (on your desktop) visible to your database (on the unix box)?
How to load 100 CSV files to DSO / InfoCube
Hi
How to load 100 CSV files of transactional data to a DSO / InfoCube?
Hi,
You can achieve this by writing a .bat (Windows batch) program.
The .bat program will convert the .xls files to .csv format.
In the process chain, you can use the OS Command option while creating the process chain.
It will first trigger the .bat file, then the converted files and the InfoPackage.
I am also working on the same scenario now.
Please find below the docs on how to create a process chain using OS Command:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0c41c52-2638-2e10-0b89-ffa4e2f10ac0?QuickLink=index&…
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/3034bf3d-9dc4-2d10-9e93-d4ba01505ed9?QuickLink=index&…
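Outside BW, the 100-file batch pattern boils down to a loop over the directory. This Python sketch (the directory layout and function name are hypothetical) shows the shape that the .bat/process-chain combination automates:

```python
import csv, glob, os

def load_all_csvs(directory):
    """Read every .csv file in `directory` and concatenate the rows;
    in the BW scenario each file would instead feed an InfoPackage."""
    rows = []
    for path in sorted(glob.glob(os.path.join(directory, "*.csv"))):
        with open(path, newline="") as fh:
            rows.extend(csv.reader(fh))
    return rows
```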
Thanks,
Phani. -
Hi guys,
I'm new here and have browsed some of the related topics regarding my problem, but could not find anything to fix it, so I decided to post this.
I currently have a waveform saved from an oscilloscope that is quite big (around 700 MB, CSV file format), and I want to view it on my PC using SignalExpress. Unfortunately, when I try to load the file using "Load/Save Signals -> Load From ASCII", I always get the "Not enough memory to complete this operation" error. How can we view and analyze large waveform files in SignalExpress? Is there a workaround for this?
Thanks,
Louie
P.S. I'm very new to SignalExpress and haven't modified any settings on it.
Hi Louie,
Are you encountering a read-only message when you tried to save the boot.ini file? If so, you can try this method: right-click on My Computer >> Select "Properties", and go to the "Advanced" tab. Select "Settings", and on the next screen, there is a button called Edit. If you click on Edit you should be able to modify the "/3GB" tag in boot.ini. Are you able to change it in this manner? After reboot, you can reload the file to see if it helps.
To open a file in SignalExpress, a contiguous chunk of memory is required. If SignalExpress cannot find a contiguous memory chunk that is large enough to hold this file, then an error will be generated. This can happen when fragmentation occurs in memory, and fragmentation (memory management) is managed by Windows, so unfortunately this is a limitation that we have.
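The contiguous-allocation problem is why streaming tools read big files in pieces. A rough Python analogue of chunked loading (the chunk size is arbitrary, not a SignalExpress setting):

```python
import csv

def iter_chunks(fh, rows_per_chunk=100_000):
    """Yield the file as fixed-size lists of rows so that no single
    allocation ever has to hold all ~700 MB at once."""
    chunk = []
    for row in csv.reader(fh):
        chunk.append(row)
        if len(chunk) == rows_per_chunk:
            yield chunk
            chunk = []
    if chunk:
        yield chunk
```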
As an alternative, have you looked at NI DIAdem before? It is a software tool that allows users to manage and analyze large volumes of data, and has some unique memory management method that lets it open and work on large amounts of data. There is an evaluation version which is available for download; you can try it out and see if it is suitable for your application.
Best regards,
Victor
NI ASEAN
Attachments:
Clipboard01.jpg 181 KB -
How to load excel-csv file into oracle database
Hi
I want to load an Excel file with a CSV extension (named trial.csv) into an Oracle database table (named dept). I have described my experiment below.
I am getting the following error; how should I rectify it?
For the ID column I defined the datatype as NUMBER(5), while in my control file I defined it as INTEGER EXTERNAL(5); why then does the error log file show the datatype for the ID column as CHARACTER?
1)my oracle database table is
SQL> desc dept;
Name Null? Type
ID NUMBER(5)
DNAME CHAR(20)
2)my data file is(trial.csv)
ID DNAME
11 production
22 purchase
33 inspection
3)my control file is(trial.ctl)
LOAD DATA
INFILE 'trial.csv'
BADFILE 'trial.bad'
DISCARDFILE 'trial.dsc'
APPEND
INTO TABLE dept
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(ID POSITION(1:5) INTEGER EXTERNAL(5)
NULLIF ID=BLANKS,
DNAME POSITION(6:25) CHAR
NULLIF DNAME=BLANKS)
4)my syntax on cmd prompt is
c:\>sqlldr scott/tiger@xxx control=trial.ctl direct=true;
Load completed - logical record count 21.
5)my log file error message is
Column Name Position Len Term Encl Datatype
ID 1:5 5 , O(") CHARACTER
NULL if ID = BLANKS
DNAME 6:25 20 , O(") CHARACTER
NULL if DNAME = BLANKS
Record 1: Rejected - Error on table DEPT, column ID.
ORA-01722: invalid number
Record 21: Rejected - Error on table DEPT, column ID.
ORA-01722: invalid number
Table DEPT:
0 Rows successfully loaded.
21 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 21
Total logical records rejected: 21
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 0
Total stream buffers loaded by SQL*Loader load thread: 0
6)Result
SQL> select * from dept;
no rows selected
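All 21 logical records were rejected, which is consistent with the header line (ID DNAME) being loaded as data and the remaining rows not matching the fixed POSITION(1:5) layout. A rough pre-check in Python (the parsing here is deliberately loose, splitting on commas or whitespace) flags which lines would trip ORA-01722; on the SQL*Loader side, OPTIONS (SKIP=1) would at least skip the header row:

```python
def find_bad_id_rows(lines):
    """Return 1-based line numbers whose first field is not an integer;
    these are the candidates SQL*Loader rejects with ORA-01722."""
    bad = []
    for n, line in enumerate(lines, start=1):
        tokens = line.replace(",", " ").split()
        if not tokens or not tokens[0].isdigit():
            bad.append(n)
    return bad

print(find_bad_id_rows(["ID DNAME", "11 production", "22 purchase"]))  # [1]
```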
by
balamuralikrishnan.s
Hi everyone!
The following are the steps to load an Excel sheet into an Oracle table. I tried to be as simple as possible.
thanks
Step # 1
Prepare a data file (Excel sheet) to be uploaded to the Oracle table.
"Save As" the Excel sheet to ".csv" (comma-separated values) format.
Then save it as a ".dat" file,
e.g. datafile.dat
1,Category Wise Consumption Summary,Expected Receipts Report
2,Category Wise Receipts Summary,Forecast Detail Report
3,Current Stock List Category Wise,Forecast rule listing
4,Daily Production Report,Freight carrier listing
5,Daily Transactions Report,Inventory Value Report
Step # 2
Prepare a control file that defines the data file to be loaded, the column separator, and the names and types of the table columns into which data is loaded. The control file extension should be ".ctl"!
E.g. I want to load the data into the tasks table from the file datafile.dat, using the control file controlfile.ctl.
SQL> desc tasks;
Name Null? Type
TASK_ID NOT NULL NUMBER(14)
TASK_NAME VARCHAR2(120)
TASK_CODE VARCHAR2(120)
controlfile.ctl:
LOAD DATA
INFILE 'e:\datafile.dat'
INTO TABLE tasks
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(TASK_ID INTEGER EXTERNAL,
TASK_NAME CHAR,
TASK_CODE CHAR)
The above is the structure for a control file.
Step # 3
The final step is to run the sqlldr command to execute the load.
sqlldr userid=scott/tiger@abc control=e:\controlfile.ctl log=e:\logfile.log
Message was edited by:
user578762 -
Memory problem with loading a csv file and displaying 2 xy graphs
Hi there, i'm having some memory issues with this little program.
What I'm trying to do is read a .csv file of 215 MB (roughly 6 million lines), extract the x-y values as 1D arrays, and display them in 2 XY graphs (VI attached).
I've noticed that this process eats from 1.6 to 2 GB of RAM, and the 2 XY graphs, as soon as they are loaded (2 minutes more or less), are really slow to move with the scrollbar.
My question is: is there a way to use fewer memory resources and make the graphs move more smoothly?
Thanks in advance,
Ierman Gert
Attachments:
read from file test.vi 106 KB
Hi Ierman,
how many datapoints do you need to handle? How many do you display on the graphs?
Some notes:
- Each graph has its own data buffer. So all data wired to the graph will be buffered again in memory. When wiring a (big) 1d array to the graph a copy will be made in memory. And you mentioned 2 graphs...
- load the array in parts: read a number of lines, parse them to arrays as before (maybe using "spreadsheet string to array"?), finally append the parts to build the big array (may lead to memory problems too).
- avoid datacopies when handling big arrays. You can show buffer creation using menu->tools->advanced->show buffer allocation
- use SGL instead of DBL when possible...
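One more option in the same spirit as the notes above (not something GerdW states, so treat it as a suggestion): decimate before wiring to the graph. A graph cannot show more points than the screen has pixels, so an evenly spaced subset displays the same shape at a fraction of the memory. A Python sketch of the idea, with an arbitrary point budget:

```python
def decimate(samples, max_points=2000):
    """Keep at most `max_points` evenly spaced samples for display:
    a screen is only a few thousand pixels wide, so plotting all
    6 million points buys nothing but memory use."""
    step = max(1, -(-len(samples) // max_points))  # ceiling division
    return samples[::step]
```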
Message Edited by GerdW on 05-12-2009 10:02 PM
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Error in date format when I load a CSV file
I am using Oracle 10g XE and I am trying to load data into my database from comma-separated files.
When I load the data from a CSV file which has the date with the following format "DD/MM/YYYY", I received the following error "ORA-01843: not a valid month".
I have NLS_LANG set to AMERICAN. I have tried the command ALTER SESSION SET NLS_DATE_FORMAT='DD/MM/YYYY' and it does nothing. When I run SELECT SYSDATE "NOW" FROM DUAL; I get the date in the format "10-NOV-06".
I would appreciate any help with migrating my data with date fields in the format DD/MM/YYYY.
Sincerely,
Polonio
See Re: Get error in date when I load a CSV file
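The ORA-01843 means Oracle tried to read the day field with a month mask. An explicit format mask removes the ambiguity, whether set on the session (note the underscores in NLS_DATE_FORMAT) or applied when pre-validating the file. A Python sketch of such a pre-check (function name is made up):

```python
from datetime import datetime

def parse_ddmmyyyy(value):
    """Parse DD/MM/YYYY explicitly instead of trusting the session's
    NLS_DATE_FORMAT; raises ValueError on rows Oracle would reject."""
    return datetime.strptime(value, "%d/%m/%Y")

print(parse_ddmmyyyy("10/11/2006").strftime("%Y-%m-%d"))  # 2006-11-10
```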