Logging data loads through a BW data flow

Dear all,
I am reaching out to those of you who have already implemented this type of functionality. I am trying to find the easiest, least complex way to implement logging for an existing BW data flow.
What I mean is that a data load via an InfoPackage produces messages on correct and erroneous records in the monitor; how can I use this information? Is there a specific table that stores each record and its message, or does a program have to be implemented that publishes the loading status into a specific table?
Thanks for your quick feedback,
LL

Hi Ludovic
The monitor messages are only written if there is a problem in the record processing, so you will only find information for records that had problems or for errors encountered during the routines.
What you can do to capture messages is write a transfer routine and append your own entries to the monitor messages table RSMONMESS.
Also, please check the tables starting with RSMO*.
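As an illustration (not part of the original reply): a minimal ABAP sketch of reading the monitor messages for one load request from the RSMONMESS table mentioned above. The request number is a placeholder, and the field names should be verified in SE11 on your system.
* Illustrative sketch: read the BW monitor messages for one load request.
* The request number below is a placeholder; verify the fields of RSMONMESS in SE11.
REPORT z_read_monitor_messages.

DATA: lt_msg TYPE STANDARD TABLE OF rsmonmess,
      ls_msg TYPE rsmonmess.

* RNR is the request number of the InfoPackage load to inspect (placeholder value).
SELECT * FROM rsmonmess
  INTO TABLE lt_msg
  WHERE rnr = 'REQU_XXXXXXXXXXXXXXXXXXXXXXXXX'.

LOOP AT lt_msg INTO ls_msg.
  " Each row holds one monitor message (class, number and variables) for the
  " request; process or forward it here as needed.
  WRITE: / ls_msg-rnr.
ENDLOOP.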
regards
Vishal

Similar Messages

  • Data load through DTP giving Error while calling up FM RSDRI_INFOPROV_READ

    Hi All
We are trying to load data into a Cube through a DTP from a DSO. In the transformation, we look up InfoCube data through the SAP standard function module 'RSDRI_INFOPROV_READ'. The problem we are facing is that our loads are failing with the error 'Unknown error in SQL Interface' and a parallel process error.
In the DTP, we have changed the number of parallel processes from 3 (default) to 1, but the issue with the data loads still exists.
We had a similar flow developed the BW 3.5 way, where we used the function module 'RSDRI_INFOPROV_READ', and there our data loads run fine.
We feel there is a compatibility issue of this FM with BI 7.0 data flows but are not sure. If anybody has any relevant inputs on this or has used this FM with a BI 7.0 flow, please let me know.
    Thanks in advance.
    Kind Regards
    Swapnil

    Hello Swapnil.
Please check note 979660, which mentions this issue.
    Thanks,
    Walter Oliveira.
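    Not part of the original replies, but for readers who land here: a rough sketch of how a transformation routine would typically call RSDRI_INFOPROV_READ. The InfoProvider, InfoObject names and target structure are placeholders, and the parameter and type names are written from memory, so verify them in SE37/SE11 on your release.
    * Rough sketch with placeholder names throughout; check the exact signature
    * of RSDRI_INFOPROV_READ in SE37 before using anything like this.
    TYPES: BEGIN OF ty_data,
             material TYPE c LENGTH 18,              "returned under alias MATERIAL
             quantity TYPE p LENGTH 9 DECIMALS 3,    "returned under alias QUANTITY
           END OF ty_data.

    DATA: lt_sfc   TYPE rsdri_th_sfc,                "characteristics to read
          ls_sfc   TYPE rsdri_s_sfc,
          lt_sfk   TYPE rsdri_th_sfk,                "key figures to read
          ls_sfk   TYPE rsdri_s_sfk,
          lt_range TYPE rsdri_t_range,               "selection restrictions
          lt_data  TYPE STANDARD TABLE OF ty_data,
          lv_end   TYPE c LENGTH 1,
          lv_first TYPE c LENGTH 1 VALUE 'X'.

    ls_sfc-chanm    = '0MATERIAL'.                   "placeholder characteristic
    ls_sfc-chaalias = 'MATERIAL'.
    INSERT ls_sfc INTO TABLE lt_sfc.

    ls_sfk-kyfnm    = '0QUANTITY'.                   "placeholder key figure
    ls_sfk-kyfalias = 'QUANTITY'.
    ls_sfk-aggr     = 'SUM'.
    INSERT ls_sfk INTO TABLE lt_sfk.

    * With I_PACKAGESIZE set, the call would normally sit in a DO loop that
    * repeats until E_END_OF_DATA comes back as 'X'.
    CALL FUNCTION 'RSDRI_INFOPROV_READ'
      EXPORTING
        i_infoprov    = 'ZSALES01'                   "placeholder InfoCube
        i_th_sfc      = lt_sfc
        i_th_sfk      = lt_sfk
        i_t_range     = lt_range
        i_packagesize = 50000
      IMPORTING
        e_t_data      = lt_data
        e_end_of_data = lv_end
      CHANGING
        c_first_call  = lv_first
      EXCEPTIONS
        OTHERS        = 1.

    IF sy-subrc <> 0.
      " Handle the read error, e.g. raise a monitor message in the transformation.
    ENDIF.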

  • Check data before loading through SQL *Loader

    Hi all,
    I have a temp table which is loaded through SQL*Loader. This table is used by a procedure for inserting data into another table.
    I frequently get error ORA-01722 during the procedure's execution.
    I have decided to check for the erroneous data through the control file itself.
    I have a few doubts about SQL*Loader.
    Will a record containing character data for a column declared as INTEGER EXTERNAL in the control file get discarded?
    Does declaring a column as INTEGER EXTERNAL take care of NULL values?
    Does a whole record get discarded if one of the column values is misplaced in the record in the input file?
    The control file is of the following format:
    LOAD DATA
    APPEND INTO TABLE Temp
    FIELDS TERMINATED BY "|" OPTIONALLY ENCLOSED BY "'"
    TRAILING NULLCOLS
    ( FILEDATE DATE 'DD/MM/YYYY',
      ACC_NUM INTEGER EXTERNAL,
      REC_TYPE,
      LOGO,              -- data: numeric, column declared: VARCHAR
      CARD_NUM INTEGER EXTERNAL,
      ACTION_DATE DATE 'DD/MM/YYYY',
      EFFECTIVE_DATE DATE 'DD/MM/YYYY',
      ACTION_AMOUNT,     -- data: numeric, column declared: NUMBER
      ACTION_STORE,      -- data: numeric, column declared: VARCHAR
      ACTION_AUTH_NUM,
      ACTION_SKU_NUM,
      ACTION_CASE_NUM )
    What changes do I need to make in this file regarding the above questions?

    Is there any online document for this?
    Here it is

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
    Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
    Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
    Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups each with 1 Gb log file
    - 4 MB Log buffer
    - every day about 150 log switches (with peaks: some log switches after 10 seconds)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
    "redo blocks read for recovery"" 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
    "redo buffer allocation retries"" 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
    So the questions:
    Can anybody see any tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing the redo logs on solid state disks?
    kind regards,
    Mirko

    user5341252 wrote:
    > I'm looking for some guidance to configure redo logs in data warehouse environments.
    > Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
    Why "of course"? What's your recovery strategy if you wreck the database?
    > Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
    This may be an indication that you need to do something to reduce index maintenance during data loading.
    > Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    For a quick check you might be better off running statspack (or AWR) snapshots across the start and end of batch to get an idea of what work goes on and where the most time goes. (A better strategy would be to examine specific jobs in detail, though.)
    > redo synch time             61.421
    > redo log space wait time    28.501
    Rough guideline - if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
    > redo buffer allocation retries    95.901
    This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
    > Can anybody see any tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing the redo logs on solid state disks?
    Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis

  • CR2008  Cannot report on IIS Log file data source

    I have CR2008 (SP0) and CRXIR2 installed on a Vista desktop.   I can create reports against IIS Log files using XIR2.   When I attempt to make the data connection using CR2008, I go through the same dialog to select log files and dates but at the end it displays "no items found" and I have no table connection that I can add to the report.
    My primary source is IIS 6 log files on a server using a mapped drive.  I have also tried the same using local IIS 7 on the same Vista pc that Crystal is installed on and neither work using CR2008;  both work fine from CRXIR2.
    I also opened from CR2008 an IIS log report created in CRXIR2 and oddly the report runs from 2008 but I cannot modify the data connection as I again just receive "no items found" if I attempt to establish another connection.
    I've tried a re-install and I've confirmed in Crystal setup that I have installed Web Activity Logs as Data source.

    Hi there,
    Try the following:
    1.  Connect to the directory on your IIS server that contains the log files.  They should be in C:\Windows\system32\LogFiles\W3SVC1
    (Alternatively, I would suggest copying one of these files locally to your workstation to test.  The files should be in the format ex*.log)
    2.  Open up CR2008, and do a "Create New Connection"
    3.  Then choose "More Data Sources" -> "MS IIS/Proxy Log Files"
    4.  Point to where your logfile is, whether locally or remotely on the server. 
    5.  A "Select Log Files and Dates" window should appear
    6.  Under "Enter Log File Format and Location" panel (at the top), choose "Extend (ex*.log)" format
    7.  Browse to the file that you would like to report off.

  • How to efficiently log multiple data streams with TDMS

    Ok, first off, I'll admit I am completely clueless when it comes to logging, TDMS in particular.  That said, I'm trying to work out the best way to log some data from an existing LabVIEW-based control system, so that users can later access that data in the event of catastrophic failure or other situations where they might want to see exactly what happened during a particular run.
    I've got a total of between 6 and 12 data points that need to be stored (depending on how many sensors are on the system).  These are values being read from a cRIO control system.  They can all be set to Single data type, if necessary - even the one Boolean value I'm tracking is already being put through the "convert to 0,1" for graph display purposes.  The data is currently read at 100ms intervals for display, but I will be toying with the rate that I want to dump data to the disk - a little loss is OK, just need general trending for long term history.  I need to keep file sizes manageable, but informative enough to be useful later.
    So, I am looking for advice on the best way to set this up.  It will need to be a file that can be concurrently be read as it is being written, when necessary - one of the reasons I am looking at TDMS in the first place (it was recommended to me previously).  I also need an accurate Date/Time stamp that can be used when displaying the data graphically on a chart, so they can sync up with the external camera recordings to correlate just what happened and when.
    Are there specific pitfalls I should watch for?  Should I bundle all of the data points into an array for each storage tick, then decimate the array on the other end when reading?  I've dug through many of the examples, even found a few covering manual timestamp writing, but is there a preferred method that keeps file size minimized (or extraction simplified)?
    I definitely appreciate any help...  It's easy to get overwhelmed and confused in all of the various methods I am finding for handling TDMS files, and determining which method is right for me.

    I need to bump this topic again...  I'll be honest, the TDMS examples and available help are completely letting me down here.
    As I stated, I have up to 12 data values that I need to stream into a log file, so TDMS was suggested to me.  The fact that I can concurrently read a file being written to was a prime reason I chose this format.  And, "it's super easy" as I was told...
    Here's the problem.  I have multiple data streams.  Streams that are not waveform data, but actual realtime data feedback from a control system, that is being read from a cRIO control system into a host computer (which is where I want to log the data).  I also need to log an accurate timestamp with this data.  This data will be streamed to a log file in a loop that consistently writes a data set every 200ms (that may change, not exactly sure on the timing yet).
    Every worthwhile example that I've found has assumed I'm just logging a single waveform, and the data formatting is totally different from what I need.  I've been flailing around with the code, trying to find a correct structure to write my data (put it all in an array, write individual points, etc) and it is, quite honestly, giving me a headache.  And finding the correct way for applying the correct timestamp (accurate data and time the data was collected) is so uncharacteristically obtuse and hard to track down...  This isn't even counting how to read the data back out of the file to display for later evaluation and/or troubleshooting...  Augh!
    It's very disheartening when a colleague can throw everything I'm trying to do together in 12 minutes in the very limited SCADA user interface program he uses to monitor his PLCs...  Yet LabVIEW, the superior program I always brag about, is slowly driving me insane trying to do what seems like a relatively simple task like logging...
    So, does anyone have any actual useful examples of logging multiple DIFFERENT data points (not waveforms) and timestamps into a TDMS file?  Or real suggestions for how to accomplish it, other than "go look at the examples" which I have done (and redone).  Unless, of course, you have an actual relevant example that won't bring up more questions than it answers for me, in which case I say "bring it on!"
    Thanks for any help...  My poor overworked brain will be eternally grateful.

  • Error logging for data rules in owb11gr2

    Hi all,
    I was playing around with error logging for data rules, and I realized that when an error gets logged into the error table for failing a particular data rule for a table, some of the columns in the error table, such as ORA_ERR_NUMBER$, ORA_ERR_MESG$ and ORA_ERR_OPTYP$, were not filled in. Why is this so? Is there any way to populate these fields as well when a row gets written? The optype field could be useful to identify the operation type of the erroneous row.
    Also, does anyone know whether the error table for dimensions works correctly? I replicated the portion of the mapping flow that goes to my dimension, and even though the erroneous row gets logged into the error table, the ERR$$$_OPERATOR_NAME for that row did not show the dimension object but instead showed another of my table operators in the mapping. I am pretty bewildered as to why this is the case.

    The cube operator in 11gR2 also supports DML error logging (as well as orphan management handling). This is enabled by setting the property 'DML Error table name' (in group Error table) on the Cube operator inside the mapping. The error table specified will be created when the mapping is deployed (if you specify an existing one the error is trapped).
    The DML error handling will catch physical errors on the load of the fact.
    Cheers
    David

  • Data flows are getting started but not completing successfully while extracting/loading of the data

    Hello People,
    We are facing abnormal behavior with the dataflows in the Data Services job.
    Scenario:
    We are extracting the data from CRM end in parallel. Please refer the build:
    a. We have 5 main workflows flows i.e :
       => Main WF1 has 6 more sub Wf's in it, in which each sub Wf has 1/2 DF's associated in parallel.
       => Main WF2 has 21 DF's and 1 WFa->with a DF & a WFb. WFb has 1 DF in parallel.
       => Main WF3 has 1 DF in parallel.
       => Main WF4 has 3 DF in parallel.
       => Main WF5 has 1 WF & a DF in sequence.
    b. Normally the job works perfectly fine, but sometimes it gets stuck at a DF without any error logs.
    c. The job doesn't get stuck at a specific dataflow or on a specific day; many times it gets stuck at different DF's.
    d. Observations in the Monitor Log:
    Dataflow          State      RowCnt    LT         AT
    +DF1/ZABAPDF      PROCEED    234000    8.113      394.164
    /DF1/Query        PROCEED    234000    8.159      394.242
    -DF1/Query_2      PROCEED    234000    8.159      394.242
    Where LT: Lapse Time and AT: Absolute time
    If you check the monitor log, the state of the dataflow DF1 remains PROCEED till the end; ideally it should complete.
    In successful jobs, the status for DF1 is STOP. This DF takes approx. 2 min to execute.
    The row count for the DF1 extraction is 234204, but it got stuck at 234000.
    We then terminate the job after some time, but, surprisingly, it executes successfully the next day.
    e. Analysis of all the failed jobs shows the same behavior across the different data flows that got stuck during execution. The logic in the data flows is perfectly fine.
    Observations in the Trace log:
    DATAFLOW: Process to execute data flow <DF1> is started.
    DATAFLOW: Data flow <DF1> is started.
    ABAP: ABAP flow <ZABAPDF> is started.
    ABAP: ABAP flow <ZABAPDF> is completed.
    Cache statistics determined that data flow <DF1>
    uses <0>caches with a total size of <0> bytes. This is less than(or equal to) the virtual memory <1609564160> bytes available for caches.
    Statistics is switching the cache type to IN MEMORY.
    DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
    DATAFLOW: <DF1> is completed successfully.
    The highlighted text in the trace log does not appear for the unsuccessful job, but it does appear for the successful one.
    Note: The cache type is pageable cache, DS ver is 3.2.
    Please suggest.
    Regards,
    Santosh

    Hi Santosh,
    Just a wild guess:
    Would you be able to replicate all the DF's/WF's, delete the original DF's/WF's, rename the replicated objects to the original DF/WF names (for your convenience) and execute the job?
    Sometimes the reference does not work.
    Hope this works.
    Regards,
    Shiva Sahu

  • Need help in logging JTDS data packets

    Hi All,
    I have a web application which uses a SQL Server database.
    I have to track down some problems with the database connection, and for that I need to log the jTDS data packets.
    I have tried to use the class net.sourceforge.jtds.jdbc.TdsCore, but the constructor of the TdsCore class needs two parameters: one is ConnectionJDBC2 and the other is SQLDiagnostic.
    I have tried a lot, but it did not allow me to import the class SQLDiagnostic.
    I need help in logging JTDS data packets. If there are any other ways, or if anybody has any idea about logging JTDS data packets/SQLDiagnostic, please reply; it is urgent!
    Thanks in advance!

    if you want to use log4j then,
    in your project create a file called log4j.properties and add this
    # Set root logger level to INFO and its only appender to ConsoleOut.
    log4j.rootLogger=INFO,ConsoleOut
    # ConsoleOut is set to be a ConsoleAppender.
    log4j.appender.ConsoleOut=org.apache.log4j.ConsoleAppender
    # ConsoleOut uses PatternLayout.
    log4j.appender.ConsoleOut.layout=org.apache.log4j.PatternLayout
    log4j.appender.ConsoleOut.layout.ConversionPattern=%-5p: [%d] %c{1} - %m%n
    log4j.logger.org.apache.jsp=DEBUG
    #Addon for
    com.sun.faces.level=FINE
    Go to your class and add this line:
    private static final Logger logger = Logger.getLogger("classname");
    and then you can use
    logger.info();
    logger.error();
    methods

  • DS 4.2 get ECC CDHDR deltas in ABAP data flow using last run log table

    I have a DS 4.2 batch job where I'm trying to get ECC CDHDR deltas inside an ABAP data flow.  My SQL Server log table has an ECC CDHDR last_run_date_time (e.g. '6/6/2014 10:10:00') where I select it at the start of the DS 4.2 batch job run and then update it to the last run date/time at the end of the DS 4.2 batch job run.
    The problem is that CDHDR has the date (UDATE) and time (UTIME) in separate fields, and inside an ABAP data flow there are limited DS functions. For example, outside of the ABAP data flow I could use the DS function concat_date_time for UDATE and UTIME so that I could have a where clause of 'concat_date_time(UDATE, UTIME) > last_run_date_time and concat_date_time(UDATE, UTIME) <= current_run_date_time'. However, inside the ABAP data flow the DS function concat_date_time is not available. Is there some way to concatenate UDATE + UTIME inside an ABAP data flow?
    Any help is appreciated.
    Thanks,
    Brad

    Michael,
    I'm trying to concatenate date and time and here's my ABAP data flow where clause:
    CDHDR.OBJECTCLAS in ('DEBI', 'KRED', 'MATERIAL')
    and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) > $CDHDR_Last_Run_Date_Time)
    and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) <= $Run_Date_Time)
    Here are DS print statements showing my global variable values:
    $Run_Date_Time is 2014.06.09 14:14:35
    $CDHDR_Last_Run_Date_Time is 1900.01.01 00:00:01
    The issue is I just created a CDHDR record with a UDATE of '06/09/2014' and UTIME of '10:48:27' and it's not being pulled in the ABAP data flow.  Here's selected contents of the generated ABAP file (*.aba):
    PARAMETER $PARAM1 TYPE D.
    PARAMETER $PARAM2 TYPE D.
    concatenate CDHDR-UDATE ' ' into ALTMP1.
    concatenate ALTMP1 CDHDR-UTIME into ALTMP2.
    concatenate CDHDR-UDATE ' ' into ALTMP3.
    concatenate ALTMP3 CDHDR-UTIME into ALTMP4.
    IF ( ( ALTMP4 <= $PARAM2 )
    AND ( ALTMP2 > $PARAM1 ) ).
    So $PARAM1 corresponds to $CDHDR_Last_Run_Date_Time ('1900.01.01 00:00:01') and $PARAM2 corresponds to $Run_Date_Time ('2014.06.09 14:14:35').  But from my understanding ABAP data type D is for date only (YYYYMMDD) and doesn't include time, so is my time somehow being defaulted to '00:00:00' when it gets to DS?  I ask this as a CDHDR record I created on 6/6 wasn't pulled during my 6/6 testing but this 6/6 CDHDR record was pulled today.
    I can get  last_run_date_time and current_run_date_time into separate date and time fields but I'm not sure how to build the where clause using separate date and time fields.  Do you have any recommendations or is there a better way for me to pull CDHDR deltas in an ABAP data flow using something different than a last run log table?
    Thanks,
    Brad
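    There was no further reply in this thread, but as an illustration (plain ABAP Open SQL with made-up parameter names, not the DS-generated code): the delta condition can be written against the separate date and time fields, without concatenating anything, which avoids the type D truncation described above.
    * Illustrative sketch with placeholder parameter names: select CDHDR deltas
    * using separate date (UDATE) and time (UTIME) bounds instead of a
    * concatenated timestamp.
    PARAMETERS: p_ldate TYPE d,            "last run date
                p_ltime TYPE t,            "last run time
                p_cdate TYPE d,            "current run date
                p_ctime TYPE t.            "current run time

    DATA lt_cdhdr TYPE STANDARD TABLE OF cdhdr.

    SELECT * FROM cdhdr
      INTO TABLE lt_cdhdr
      WHERE objectclas IN ('DEBI', 'KRED', 'MATERIAL')
        AND (    udate > p_ldate
              OR ( udate = p_ldate AND utime > p_ltime ) )
        AND (    udate < p_cdate
              OR ( udate = p_cdate AND utime <= p_ctime ) ).
    The same idea could be expressed directly in the ABAP data flow's where clause with OR conditions on separate date and time global variables, so nothing has to be concatenated inside the data flow; treat this as a sketch to adapt rather than a tested solution.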

  • How can I log the data transmission of my switch in a file to analyze the quality of my communication channel?

    How can I log the data transmission of my switch in a file to analyze the quality of my communication channels?

    A lot depends on what type of switch you have and what kind of communication channels you're asking about.
    There are several Cisco tools (e.g., "ip sla", SNMP-queried values, show commands etc.) that can give useful information.
    If you give us some more information we can help more specifically.

  • To Find the user log off date

    Hello Gurus
    I need to find the user log-off and log-on details. Suppose the user logs off today and logs on tomorrow; then I need to get both the log-off and the log-on details. The log details can be found in table USR02. Please help me find the log-off date of the user. Thanks
    Ganesh
    Edited by: Ganesh Kumar on Mar 9, 2009 6:57 PM

    Explore SM19 and SM20 tcodes for your requirement.
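    To add an illustration that is not from the original reply: USR02 only keeps the last log-on date and time, not log-off events, which is why the security audit log (SM19/SM20) is needed for log-offs. A minimal sketch of reading the last-logon fields, assuming the usual USR02 fields TRDAT and LTIME:
    * Sketch: read a user's last log-on date/time from USR02 (no log-off info here).
    PARAMETERS p_user TYPE xubname DEFAULT sy-uname.

    DATA ls_usr02 TYPE usr02.

    SELECT SINGLE * FROM usr02
      INTO ls_usr02
      WHERE bname = p_user.

    IF sy-subrc = 0.
      WRITE: / 'Last log-on date:', ls_usr02-trdat,
             / 'Last log-on time:', ls_usr02-ltime.
    ENDIF.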

  • How to retrieve users logging-in and logging-out date and times in SharePoint

    At the moment I am using SharePoint 2013 with a few tenants.
    I would like to get access to the users' log-in and log-out dates and times.
    For instance, I would like to know the detail of the dates and times at which a particular user of a tenant has logged in and logged out during the past few months.
    Any idea?

    You can retrieve that info from the IIS log files. Maybe you can use a free IIS reporting tool that I've built and adjust it to your own needs; you can get it here:
    http://gallery.technet.microsoft.com/office/The-SharePoint-Flavored-5b03f323
    Btw, in a web environment usually there is no such thing as the log-out date and time because the end user just stops making requests. So, you've got to take a look at the last request and by default, after 20 minutes the session times out and you can assume
    the session has ended.
    Kind regards,
    Margriet Bruggeman
    Lois & Clark IT Services
    web site: http://www.loisandclark.eu
    blog: http://www.sharepointdragons.com

  • Can't log into Data Services

    Hello Gurus,
    I get this error message "Cannot initialize application. (BODI-1270039)" when I try to log into Data Services.
    I've tried reinstalling the Data Services (Client) but still the same problem.
    What's causing this issue?

    Hi l.v,
    In Windows, go to the Run option and enter the 'regedit' command, then look under HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER and make sure that you do not have any BusinessObjects entries; if you do, please delete them.
    This assumes that you are only installing DS on this PC. If that is not the case, then you need to be extra careful and only remove the entries that belong to DS.
    Cheers
    Hai.

  • Data flow tasks faills while loading from database to excel

    Hello,
    I am getting an error while loading from an OLE DB source to Excel; the error is shown below.
    Error: 0xC0202009 at DFT - Company EX, OLE DB Destination [198]: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    Error: 0xC0209029 at DFT - Company EX, OLE DB Destination [198]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (211)" failed because error code 0xC020907B occurred, and the error row
    disposition on "input "OLE DB Destination Input" (211)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be error messages posted before this with more information about the
    failure.
    Error: 0xC0047022 at DFT - Company EX: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "OLE DB Destination" (198) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput
    method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.  There may be error messages posted before this with more information about the failure.
    Error: 0xC02020C4 at DFT - Company EX, OLE DB Source 1 [1]: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
    Error: 0xC0047021 at DFT - Company EX: SSIS Error Code DTS_E_THREADFAILED.  Thread "WorkThread0" has exited with error code 0xC0209029.  There may be error messages posted before this with more information on why the thread has exited.
    Error: 0xC0047038 at DFT - Company EX: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.  The PrimeOutput method on component "OLE DB Source 1" (1) returned error code 0xC02020C4.  The component returned a failure code when the pipeline engine
    called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.  There may be error messages posted before this with more information about the failure.
    Error: 0xC0047021 at DFT - Company EX: SSIS Error Code DTS_E_THREADFAILED.  Thread "SourceThread0" has exited with error code 0xC0047038.  There may be error messages posted before this with more information on why the thread has exited.
    Any help would be appreciated ASAP.
    Thanks,
    Vinay s

    You can use this code to import from SQL Server to Excel . . .
    Sub ADOExcelSQLServer()
    ' Carl SQL Server Connection
    ' FOR THIS CODE TO WORK
    ' In VBE you need to go Tools References and check Microsoft Active X Data Objects 2.x library
    Dim Cn As ADODB.Connection
    Dim Server_Name As String
    Dim Database_Name As String
    Dim User_ID As String
    Dim Password As String
    Dim SQLStr As String
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    Server_Name = "EXCEL-PC\EXCELDEVELOPER" ' Enter your server name here
    Database_Name = "AdventureWorksLT2012" ' Enter your database name here
    User_ID = "" ' enter your user ID here
    Password = "" ' Enter your password here
    SQLStr = "SELECT * FROM [SalesLT].[Customer]" ' Enter your SQL here
    Set Cn = New ADODB.Connection
    Cn.Open "Driver={SQL Server};Server=" & Server_Name & ";Database=" & Database_Name & _
    ";Uid=" & User_ID & ";Pwd=" & Password & ";"
    rs.Open SQLStr, Cn, adOpenStatic
    ' Dump to spreadsheet
    With Worksheets("sheet1").Range("a1:z500") ' Enter your sheet name and range here
    .ClearContents
    .CopyFromRecordset rs
    End With
    ' Tidy up
    rs.Close
    Set rs = Nothing
    Cn.Close
    Set Cn = Nothing
    End Sub
    Also, check this out . . .
    Sub ADOExcelSQLServer()
    Dim Cn As ADODB.Connection
    Dim Server_Name As String
    Dim Database_Name As String
    Dim User_ID As String
    Dim Password As String
    Dim SQLStr As String
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    Server_Name = "LAPTOP\SQL_EXPRESS" ' Enter your server name here
    Database_Name = "Northwind" ' Enter your database name here
    User_ID = "" ' enter your user ID here
    Password = "" ' Enter your password here
    SQLStr = "SELECT * FROM Orders" ' Enter your SQL here
    Set Cn = New ADODB.Connection
    Cn.Open "Driver={SQL Server};Server=" & Server_Name & ";Database=" & Database_Name & _
    ";Uid=" & User_ID & ";Pwd=" & Password & ";"
    rs.Open SQLStr, Cn, adOpenStatic
    With Worksheets("Sheet1").Range("A2:Z500")
    .ClearContents
    .CopyFromRecordset rs
    End With
    rs.Close
    Set rs = Nothing
    Cn.Close
    Set Cn = Nothing
    End Sub
    Finally, if you want to incorporate a Where clause . . .
    Sub ImportFromSQLServer()
    Dim Cn As ADODB.Connection
    Dim Server_Name As String
    Dim Database_Name As String
    Dim User_ID As String
    Dim Password As String
    Dim SQLStr As String
    Dim RS As ADODB.Recordset
    Set RS = New ADODB.Recordset
    Server_Name = "Excel-PC\SQLEXPRESS"
    Database_Name = "Northwind"
    'User_ID = "******"
    'Password = "****"
    SQLStr = "select * from dbo.TBL where EMPID = '2'" 'and PostingDate = '2006-06-08'"
    Set Cn = New ADODB.Connection
    Cn.Open "Driver={SQL Server};Server=" & Server_Name & ";Database=" & Database_Name & ";"
    '& ";Uid=" & User_ID & ";Pwd=" & Password & ";"
    RS.Open SQLStr, Cn, adOpenStatic
    With Worksheets("Sheet1").Range("A1")
    .ClearContents
    .CopyFromRecordset RS
    End With
    RS.Close
    Set RS = Nothing
    Cn.Close
    Set Cn = Nothing
    End Sub
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
