BCP utility uploads 0 rows to Azure SQL Server

bcp azure_db.dbo.db_table in calendar_dates.txt -f ../../../scripts/format_files/calendar_dates.fmt -F 2 /S tcp:azure_server /U username@servername /P password
When I issue this command through a PowerShell prompt, I get the following response:
starting copy..
Network packet size 4096
Clock 546 ms
0 rows copied.
When I look at the table in Azure, no rows are reported. I even tried removing all entries from the text file, leaving just the first two lines, all to no avail.
If I get the input file name or the format file name wrong, or even the first line of the format file (which indicates the SQL Server version) wrong, an appropriate error message is returned from BCP. With correct parameters, it seems, no error is reported, yet nothing happens. Has anyone seen this?

Hi Klaus Nji,
Below are the steps that I used to attempt to repro your situation. I, unfortunately, was not able to successfully repeat the error, or lack thereof, that you received. My intention in posting these steps is that you can use them as a comparison for troubleshooting.
1) Create and populate a table:
CREATE TABLE calendar_dates ([begin] datetime, [end] datetime)
CREATE UNIQUE CLUSTERED INDEX index_name
ON calendar_dates ([begin])
INSERT INTO calendar_dates VALUES (GETUTCDATE(), dateadd(day, 1, GETUTCDATE()))
INSERT INTO calendar_dates VALUES (GETUTCDATE(), dateadd(day, 1, GETUTCDATE()))
INSERT INTO calendar_dates VALUES (GETUTCDATE(), dateadd(day, 1, GETUTCDATE()))
SELECT * FROM calendar_dates
2) BCP the data to your local machine to get the proper format file
bcp "SELECT * FROM [dbo].[calendar_dates]" queryout C:\<path>\dateout.txt -S <serverName>.database.windows.net -U <userName> -P <password> -d <database>
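If the format file itself (formatDate.txt, used in step 4) still needs to be generated, bcp's format nul mode can produce one - a sketch assuming the same table and credentials, with -c selecting character format:

```
bcp dbo.calendar_dates format nul -f C:\<path>\formatDate.txt -c -S <serverName>.database.windows.net -U <userName> -P <password> -d <database>
```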
3) Create a second table to input data
CREATE TABLE calendar_dates_input ([begin] datetime, [end] datetime)
CREATE UNIQUE CLUSTERED INDEX index_name_input
ON calendar_dates_input ([begin])
4) BCP the data into the second table
BCP dbo.calendar_dates_input in C:\<path>\dateout.txt -f C:\<path>\formatDate.txt -S <serverName>.database.windows.net -U <userName> -P <password> -d <database>
5) I receive the following output:
Starting copy...
6 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 344    Average : (17.44 rows per sec.)
Another thing to consider is the version of the BCP client that you are running (bcp -v). For my attempted repro, I am running the following version:
    PS C:\> bcp -v
    BCP - Bulk Copy Program for Microsoft SQL Server.
    Copyright (C) Microsoft Corporation. All Rights Reserved.
    Version: 12.0.2000.8
To get the latest version of BCP (and SSMS), please download CU5 for SQL Server 2014 [Blog][Download]

Similar Messages

  • Restore deleted rows in sql server 2008

    Hi,
    I have a problem: I used the Import and Export Wizard in SQL Server 2008, selected the wrong database as the source and the wrong database as the destination (I reversed the databases), and in the mapping-edit step I checked the option to delete rows in the destination table.
    The final step completed, and I lost my data. I don't have a backup.
    How can I restore my data?

    It's not a straightforward activity if you don't have backups. The first thing you need to do is create a proper maintenance plan for your databases. You can refer to the links below, which could give you some clue about your problem.
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/86befdbe-9806-4d96-9e9f-ead753d0fc20/recover-deleted-rows-from-sql-server-database?forum=transactsql
    http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    Praveen Dsa | MCITP - Database Administrator 2008 |
    My Blog | My Page

  • 30 M rows in SQL Server 2008 R2- View or Table for faster performance

    Hello,
    I am creating a SSRS 2008 report that currently is using a view. It's a view made out of one table and is not indexed.
    There are 70 some columns that I am displaying in the report with 10 or so parameters that I am passing through a procedure from SSRS to SQL Server.
    When the report runs on the server or in development mode, it gets an out-of-memory error - which could be a totally different issue, as it is trying to bring in a couple million rows and runs for 15-20 minutes before erroring out.
    My question is: if it is sourcing a single table, will a view with the right index be better, or a table with an index?
    Clustered, non clustered? Any suggestions or input would be greatly appreciated.
    Thank You.

    What is the exact error message?
    In SSRS you can use a stored procedure with parameters as data source. You can use a query in the sp, you don't need a view.
    >it is trying to bring in a couple million rows and runs 20 mins.. and errors out.
    Indexing not likely to help you. You have a huge return set problem.
    Kalman Toth Database & OLAP Architect
    SELECT Query Video Tutorial 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
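    The stored-procedure approach suggested above might look like the following sketch (all object, column, and parameter names here are assumed for illustration, not taken from the original post):

    ```sql
    CREATE PROCEDURE dbo.rpt_GetSales
        @StartDate datetime,
        @EndDate   datetime
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Filter server-side and return only the columns the report renders,
        -- so SSRS never has to buffer all 30 million rows.
        SELECT SaleID, SaleDate, Amount
        FROM dbo.SalesTable
        WHERE SaleDate >= @StartDate
          AND SaleDate < @EndDate;
    END
    ```

    In SSRS, the dataset then points at dbo.rpt_GetSales, with the report parameters mapped to @StartDate and @EndDate.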

  • Loading a flat table with duplicate rows in SQL server

    Hi,
    I'm trying to load a flat table with different levels that has duplicate rows. When I'm loading it from my source SQL Server environment to my target SQL Server environment, I can only load 63 rows out of the 1225 rows. This is happening because I had to define a primary key on a couple of the columns.
    When I just try to load it without a primary key, I get an error that a PK needs to be defined for the load to happen.
    My table structure looks as follows -
    Lvl1 Lvl2 Lvl3 Lvl4 AccountID AccountDesc
    How do I load all rows of data into my target table using ODI?
    Please help

    whirlpool wrote:
    Hi,
    I'm trying to load a flat table
    What is a flat table? Are you talking about a FACT table?
    When I'm loading it from my source SQL Server environment to my target SQL Server environment, I can only load 63 rows out of the 1225 rows. This is happening because I had to define a primary key on a couple of the columns.
    When I just try to load it without a primary key, I get an error that a PK needs to be defined for the load to happen.
    Which IKM is in use? I cannot remember an IKM that needs a PK. The Incremental Update IKM needs an update key, which can be a PK or UK at the database level or the ODI level.
    My table structure looks as follows -
    Lvl1 Lvl2 Lvl3 Lvl4 AccountID AccountDesc
    How do I load all rows of data into my target table using ODI?
    If you are not bothered about a PK at the target, you can go for the SQL Control Append IKM to load your target table.
    Thanks,
    Sutirtha

  • Trying to upload rds in sql server express 2014; feature not supported in this version

    I'm trying to preview SQL Server 2014 and Report Builder capabilities; I created an RDS and am trying to upload it to SSRS, and I get the message that this is not supported. Is this only available in editions beyond Express?
    Thanks

    Scroll down to the "Reporting Services" section, you'll see Remote Data Sources only become supported in Standard Edition and up...
    http://msdn.microsoft.com/en-us/library/cc645993.aspx

  • HIPAA Compliance & SQL Server Azure

    I am looking for some information on Azure.
    1. Is Azure SQL Database covered under HIPAA compliance?
    2. Will MS sign a HIPAA BAA?
    3. How feasible is it to store the databases of more than one HIPAA-compliant application on the same server? The database is SQL. Does this violate HIPAA?
    4. I am willing to host multiple clients' DBs on the same server in separate instances; is it possible?
    5. Does MS have a SAN?
    6. How does MS separate the data of more than one HIPAA-compliant app from another?
    Please let me know the required information. Thanks.

    Hi Akshay - disclaimer: I am not a lawyer, and I am simply trying to provide you with public information to better answer your questions. From your questions and from reading the online HIPAA documentation, it would be best that you contact your Microsoft Account Manager, as Microsoft currently offers the BAA to customers who have a Volume Licensing / Enterprise Agreement (EA).
    1. Is Azure SQL Database covered under HIPAA compliance?
    Yes, please see the following [link][link][link]
    2. Will MS sign a HIPAA BAA?
    Microsoft currently offers the BAA to customers who have a Volume Licensing / Enterprise Agreement (EA) [link]
    3. How feasible is it to store the databases of more than one HIPAA-compliant application on the same server? The database is SQL. Does this violate HIPAA?
    Azure SQL Database as a cloud platform is HIPAA compliant. With respect to the application built on top of SQL Database, this is a difficult question to answer in a forum, as the response is completely contingent on the design and implementation of the application. Please see the following for more information [link].
    4. I am willing to host multiple clients' DBs on the same server in separate instances; is it possible?
    Azure SQL Database provisions databases that are grouped under a logical server. The databases, however, are likely not on the same physical server.
    5. Does MS have a SAN?
    I'm not positive what you're asking; can you please clarify?
    6. How does MS separate the data of more than one HIPAA-compliant app from another?
    Same answer as question 3.

  • Getting Changed [Status] rows between multiple rows in SQL Server 2012 based on MonthYear Field

    I am trying to create a stored proc which takes 2 inputs from the user: @PreviousMonthYear and @CurrentMonthYear.
    Below is my table schema. [TableIPPortStatus]
    Thousands of such rows will be available in TableIPPortStatus for every IP/Port combination for every scan. The stored proc will take input as @PreviousMonthYear = 'Jan-2013' and @CurrentMonthYear = 'Feb-2013'.
    The expected stored proc output is the changed statuses in the @CurrentMonthYear rows compared with the same IP/Port combinations from the @PreviousMonthYear rows. So for the above example I expect the result below,
    since in scan 2222, port 80 of IP 1.0.0.0 got closed, port 80 of IP 1.0.0.1 got opened, and for 1.0.0.2 the status was unchanged.
    Also, if any new IP/Port combination is added in Feb-2013, it needs to be made available in the output.
    Please suggest a way to accomplish this in SQL. Thanks in advance!

    Did you try this? Actually, you don't even need the @PreviousMonthYear parameter; you can simply get the result using a single parameter.
    If it's SQL 2012, this is very easy:
    DECLARE @CurrentMonthYear varchar(30)
    SET @CurrentMonthYear = 'Feb-2013'
    SELECT ScanId,MonthYear,IP,Port,Status
    FROM
    (
    SELECT *,
    LAG(Status,1,'') OVER (PARTITION BY IP,Port ORDER BY CAST('01-' + MonthYear AS datetime)) AS PrevStatus
    FROM Table
    ) t
    WHERE PrevStatus <> Status
    And if it's SQL 2008 or below:
    DECLARE @CurrentMonthYear varchar(30)
    SET @CurrentMonthYear = 'Feb-2013'
    ;With CTE
    AS
    (
    SELECT *,
    ROW_NUMBER() OVER (PARTITION BY IP,Port ORDER BY CAST('01-' + MonthYear AS datetime)) AS Seq
    FROM Table
    WHERE MonthYear = @CurrentMonthYear
    )
    SELECT c1.*
    FROM CTE c1
    LEFT JOIN CTE c2
    ON c2.IP = c1.IP
    AND c2.Port = c1.Port
    AND c2.Seq = c1.Seq - 1
    WHERE c2.Status <> c1.Status
    Please mark this as answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/ | https://www.facebook.com/VmBlogs

  • Add A Total Row To SQL Server Pivot

    My pivot produces the desired results; I just need to add a row at the bottom that shows the total count for each week. This is my query - what should I modify in order to have the weekly total displayed at the bottom?
    SELECT *
    FROM
    (
    select case
    when a.businesssector LIKE 'AR%' THEN 'American'
    when a.businesssector LIKE 'CA%' THEN 'Canada'
    else a.businesssector
    end as [Sector],
    ID As [ID],
    CONVERT(VARCHAR(20), cal.CumulativeWeek) As [Week]
    FROM hellfire.Malone a
    INNER JOIN cal.calendar cal
    on a.saledate = cal.FullDate
    Where a.saledate is not null
    ) src
    pivot
    (
    Count([ID])
    For Week IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11])
    ) piv

    Without DDL, I'd suggest you create your SQL as a CTE, then retrieve the result set from that and UNION ALL it to another SQL statement pulling from the same CTE that creates the SUMs grouped by the Week column.
    Thanks
    Carl
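    A sketch of that suggestion, reusing the pivot query from the question as the CTE (the 'Weekly Total' label is assumed; the SUMs run over the pivoted week columns):

    ```sql
    ;WITH piv AS
    (
        -- the full SELECT ... PIVOT query from the question goes here
    )
    SELECT * FROM piv
    UNION ALL
    SELECT 'Weekly Total',
           SUM([1]), SUM([2]), SUM([3]), SUM([4]), SUM([5]), SUM([6]),
           SUM([7]), SUM([8]), SUM([9]), SUM([10]), SUM([11])
    FROM piv;
    ```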

  • Utility data collection job Failure on SQL server 2008

    Hi,
    I am facing a data collection job failure issue (Utility Data Collection) on a SQL Server 2008 server. Below is the error message:
    <service Name>. The step did not generate any output.  Process Exit Code 5.  The step failed.
    The job name is collection_set_5_noncached_collect_and_upload. As I go through Google, the issue appears to be related to permissions, but where exactly are the access issues coming from? This job is running under a proxy account. Thanks in advance.

    Hi Srinivas,
    Based on your description, you encounter the error message after configuring data collection in SQL Server 2008. For further analysis, could you please help to collect detailed log information? You can check the job history to find the error log around the issue, as mentioned in this
    article. Also, please check the Data Collector logs by right-clicking on Data Collection in the Management folder and selecting View Logs.
    In addition, as your post indicates, exit code 5 is normally an 'Access is denied' code. Thus, please make sure that the proxy account has admin permissions on your system, and ensure that the SQL Server service account has rights to access the cache folder.
    Thanks,
    Lydia Zhang

  • Bulk   Insert   from  SQL Server  to Oracle

    I have to load around 20 million rows from a SQL Server table to an Oracle table over a network (database) link. I wrote the following code using BULK COLLECT, which works but takes too long (5 hours).
    I also tried changing the table to parallel degree 8, which didn't help (the Oracle table is also set to NOLOGGING mode).
    Is there a better way to do this? I appreciate any help in this regard.
    Script :
    CREATE OR REPLACE PROCEDURE INSERT_SQLSERVER_TO_ORACLE
    IS
    TYPE v_ARRAY IS TABLE OF TARGET_CUST%ROWTYPE INDEX BY BINARY_INTEGER;
    ins_rows v_ARRAY;
    CURSOR REC1 IS
    SELECT COL1, COL2, COL3, COL4 FROM SOURCE_SQLSERVER_CUST;
    BEGIN
    OPEN REC1;
    LOOP
    FETCH REC1 BULK COLLECT INTO ins_rows LIMIT 5000;
    EXIT WHEN ins_rows.COUNT = 0;  -- guard against an empty final fetch
    FORALL i IN ins_rows.FIRST..ins_rows.LAST
    INSERT INTO TARGET_CUST VALUES ins_rows(i);
    END LOOP;
    COMMIT;
    CLOSE REC1;
    END;
    Thanks in Advance.

    887204 wrote:
    I have to load around 20 million rows from SQL Server table to Oracle table using Network Link, wrote following code using Bulk Collect, which is working but taking more time (taking 5 hrs).
    I would not pull that data via a network link using standard SQL insert statements. Bulk processing is meaningless in this context; it does nothing to increase the performance, as context switching is not the issue.
    The biggest factor is pulling 20 million rows' worth of data via a database link across the network. This will be slow by its very nature.
    I would use bcp (Bulk Copy export) on SQL-Server to write the data to a CSV file.
    Zip that file. FTP/scp/sftp it to the Oracle server. Unzip it.
    Then do a parallel direct load of the data using SQL*Loader.
    This will be a lot faster than pulling uncompressed data across the network a couple of rows at a time (together with the numerous moving parts on the Oracle side, which use an HS agent as the interface between SQL Server and the Oracle database).
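    The SQL*Loader step might look like this sketch (file, table, and column names are assumed to match the question's cursor; DIRECT=TRUE requests a direct path load):

    ```
    -- target_cust.ctl (hypothetical control file matching the exported CSV layout)
    LOAD DATA
    INFILE 'target_cust.csv'
    APPEND
    INTO TABLE target_cust
    FIELDS TERMINATED BY ','
    (COL1, COL2, COL3, COL4)
    ```

    ```
    sqlldr userid=<user>/<password> control=target_cust.ctl direct=true
    ```

    For a parallel load, the input file is typically split and several sqlldr sessions run concurrently with PARALLEL=TRUE added.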

  • Run SSIS package in a machine without Sql Server

    Hi All,
    I have 3 packages which do a fairly easy job. The first uploads a flat file from my local machine to a remote server, the second does some transformation, and the third loads the file back to my local machine. I am running all of them through batch files. Now, I have to
    run those 3 batch files on my colleague's machine, which doesn't have any instance of MSSQL or Visual Studio. I sent the batch files to his machine, and when he runs them, the first one works properly (it uploads the file to the remote server); however, the second batch file doesn't
    work. My guess is that because the batch file has some T-SQL syntax and he doesn't have MSSQL, this is causing the issue. (I added the code below.) Is that possible? I cannot install SQL Server on his machine, but I need to run these packages (or batch files). Is there any
    idea how to achieve this? Thanks in advance for your input.
    osql -U sa -P mypassword -S myremoteserver -Q "msdb.dbo.sp_start_job N'ExistingOpp'"
    pause

    Hi Ozkantr,
    The osql utility is installed by SQL Server. You need to make sure the utility is installed on your colleague's machine.
    To run an SSIS package outside BIDS, the DTExec utility (which is a client tool) alone is not enough; the server components for the Integration Services service (the SSIS runtime) are also required. To install the SSIS runtime and the DTExec utility, we must install the Integration
    Services shared feature from the SQL Server install media. So, you need to install the SSIS service on the machine where the SSIS package jobs run.
    References:
    http://www.bigator.com/2012/03/11/ways-to-execute-ssis-package/
    http://www.codeproject.com/Articles/219494/SSIS-Overview-Part-I
    Regards,
    +1
    You need to install Integration Services, which installs the necessary bits on your computer and the SSIS service as well. The service can be disabled, though; you don't need it to run packages.
    I wrote a blog post about it some time ago:
    When is DTEXEC installed?
    PS: osql is deprecated. Better to start using sqlcmd.
    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.
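    For reference, the sqlcmd equivalent of the osql call above would be something like (same assumed credentials and job name as in the question):

    ```
    sqlcmd -U sa -P mypassword -S myremoteserver -Q "EXEC msdb.dbo.sp_start_job N'ExistingOpp'"
    ```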

  • Error when insert data in Sql Server table(DateTime data type)

    Hello all,
    I have created a database link in Oracle 11g to SQL Server 2008 using the SQL Server gateway for Oracle. Oracle runs on Linux and SQL Server runs on Windows.
    I have queried a table, and it fetches rows from the target table.
    I am using this syntax to insert a row into the SQL Server table:
    Insert into Prod@sqlserver (NUMITEMCODE, NUMPREOPENSTOCK, NUMQNTY, NUMNEWOPENSTOCK, DATPRODDATE , TXTCOMPANYCODE, "bolstatus", NUMRESQNTY )
    Values (1118 , 1390.0 , 100.0 ,1490 , '2012-06-23 12:37:58.000','SFP' ,0 , 0 );
    but it gives me an error on DATPRODDATE. The data type of the DATPRODDATE column in SQL Server is DATETIME.
    My question is: how can I pass date values in an INSERT statement for the SQL Server DATETIME data type?
    Regards

    Just as with Oracle, you have to specify the date using the to_date() function or use the native date format for the target database (if you can figure out what that is). This is good practice anyway and a good habit to get into.
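    For example, the INSERT from the question could be rewritten as follows (a sketch; TO_DATE has no fractional-seconds mask, so the milliseconds are dropped - TO_TIMESTAMP would be needed to keep them):

    ```sql
    Insert into Prod@sqlserver (NUMITEMCODE, NUMPREOPENSTOCK, NUMQNTY, NUMNEWOPENSTOCK, DATPRODDATE, TXTCOMPANYCODE, "bolstatus", NUMRESQNTY)
    Values (1118, 1390.0, 100.0, 1490, TO_DATE('2012-06-23 12:37:58', 'YYYY-MM-DD HH24:MI:SS'), 'SFP', 0, 0);
    ```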

  • Import Export Wizard error on SQL Server 2008 R2

    - Executing (Warning)
    Messages
    Warning: Preparation SQL Task 1: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done. (SQL Server Import and Export Wizard)
    Warning: Preparation SQL Task 1: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done. (SQL Server Import and Export Wizard)
    - Copying to `tblSFUpload` (Error)
    Messages
    Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
     (SQL Server Import and Export Wizard)
    Error 0xc0209029: Data Flow Task 1: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "Destination Input" (615)" failed because error code 0xC020907B occurred, and the error row disposition on "input "Destination
    Input" (615)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be error messages posted before this with more information about the failure.
     (SQL Server Import and Export Wizard)
    Error 0xc0047022: Data Flow Task 1: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "Destination - tblSFUpload" (604) failed with error code 0xC0209029 while processing input "Destination Input" (615).
    The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.  There may be error messages posted before this with more information
    about the failure.
     (SQL Server Import and Export Wizard)
    Error 0xc02020c4: Data Flow Task 1: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
     (SQL Server Import and Export Wizard)
    Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on component "Source - tblSFUpload" (1) returned error code 0xC02020C4.  The component returned a failure code when the pipeline engine
    called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error messages posted before this with more information about the failure.
     (SQL Server Import and Export Wizard)
    - Post-execute (Success)
    Messages
    Information 0x4004300b: Data Flow Task 1: "component "Destination - tblSFUpload" (604)" wrote 65535 rows.
     (SQL Server Import and Export Wizard)
    Can someone tell me what to do with this please?

    Hi faiz,
    >>Error 0xc02020c4: Data Flow Task 1: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
    According to this message, the failure can be caused by not enough memory being available or by the buffer size being too small. To resolve this issue, I recommend you perform the following steps.
    1. Change the Server memory configuration of the SQL server. For example, you can increase maximum memory via the following scripts. For more information, please review this article:
    Server Memory Server Configuration Options.
    sp_configure 'show advanced options', 1;
    GO
    RECONFIGURE;
    GO
    sp_configure 'max server memory', 4096;
    GO
    RECONFIGURE;
    GO
    2. Adjust the buffer size by increasing the DefaultBufferMaxSize and DefaultBufferMaxRows properties of Data Flow tasks. For more information about it, please review this article:
    Data Flow Performance Features.
    Thanks,
    Lydia Zhang

  • How can this encoding be saved in SQL Server 2008

    hi all,
    We are using SQL Server 2008 with the collation "SQL_Latin1_General_CP1_CI_AS". Please have a look at this file:
    https://drive.google.com/file/d/0BxWAyvJA9ZjCdkFNN3BXZUw3dEE/view?pli=1
    It's a text file in text format (ASCII test data format, aka ATDF). It's coming from a French vendor and contains some French words. We have to parse this and upload the data to SQL Server. We do that all the time, except this file is not working. It disturbs
    the way SQL Server is saving the data for us, and when this saved data is shown in the application UI, the correct letters don't display. The upload happens via the BULK INSERT command. Please guide me as to where to start. Also, please let me know exactly what this format
    is called - do we call it just Unicode, or some UTF format? I'm really ignorant when it comes to Unicode. Thanks a lot!
    ..ab
    Ab

    Hi
    Please use Unicode data types on the SQL Server side. Please see the following document:
    http://msdn.microsoft.com/en-us/library/ms187828(v=SQL.105).aspx 
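    A minimal sketch of that advice (table, column, and file names are assumed, as is the Windows-1252 encoding of the vendor file - check the actual encoding first): store the text in an NVARCHAR column and tell BULK INSERT the file's code page:

    ```sql
    CREATE TABLE dbo.atdf_import (line_text NVARCHAR(4000));

    BULK INSERT dbo.atdf_import
    FROM 'C:\data\vendor_file.txt'
    WITH (CODEPAGE = '1252', DATAFILETYPE = 'char');
    ```

    If the file turns out to be UTF-16, DATAFILETYPE = 'widechar' would be used instead.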
    Thanks
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Require help with Pivot table query in SQL Server 2008

    Hi,
    I have a query regarding converting rows to columns in SQL Server 2008. Please look at the table below.
    I need the output to look something like this:
    The columns for the children can be dynamic or fixed (a max of 6 children) based on the Family_ID. For example, a family can have 1 child or more than 1 child.
    Not sure how to go about it. Would appreciate your help :)

    Looks like you need a dynamic pivot on multiple columns. I have two articles on this topic; start with this one:
    T-SQL:
    Dynamic Pivot on Multiple Columns
    It has reference to my other blog post.
    For every expert, there is an equal and opposite expert. - Becker's Law
    My blog
    My TechNet articles
