Reading a range of data using Storage VIs

Hello,
My data file is in TDM format and is about 300 MB in size. I want to know if it is possible to use the Storage VIs to read in only a range of the data at a time (or to select just a range to read)?
Thanks
Lancer

Hi Lancer,
As far as I know, you can only read the full length of the selected data in the TDM file.
Does anyone have a solution for Lancer that doesn't read all the data from the TDM file into memory, but only the selected range?
- Philip Courtois, Thinkbot Solutions

Similar Messages

  • How do I read a range of data from an open and "live" Excel file using LV7.0 Express

    I need to interface with software which continuously (once per second) writes a new array to a fixed location in an open Excel file. I would like to read this data into LabVIEW, from where I can do what I like with it. I am relatively new to LabVIEW and have tried all the "Read Data" examples that come with the product; the ActiveX examples seem very unclear. Has anyone got samples or suggestions to get me started?
    Attachments:
    Changing_Excel_file.xls 17 KB

    It is possible, but it might be a little complicated. Does your application open the Excel file, or was the file already opened in Excel or by another application?
    In the first scenario, once Excel is open, start reading the data; do not close the report file (dispose the report) or close the Excel file.
    In the second scenario, it is a little more complicated. You need to use the low-level Excel ActiveX functions. The procedure is:
    1. Open a reference to Excel.
    2. Activate the desired workbook if it is already open, or open a new file.
    3. Activate the sheet containing the data.
    4. Read the data.
    5. Loop if necessary (back to step 2 for a different workbook, step 3 for a different sheet, or step 4 for the same sheet).
    6. Close the Excel reference (very important: close the reference only; do not use Application.Exit to exit Excel).
    Hope this helps,
    -Joe

  • InfoPath 2013 Read SharePoint 2013 File data using Rest API Access Denied Exception

    I am designing a set of Forms and they need to query Data from among themselves.
    The whole set up described below works in the Form Filler/Preview
    I'll call them Form A and Form B
    Form A has a repeating table that needs to be displayed in Form B
    The user selects an instance of Form A from a dropdown in Form B; using the selected ID, a REST connection is executed so that the Form A XML is available inside Form B. The connection is set up as follows:
    _api/web/lists/ListName/Items(SelectedId)/File/$value
    I publish the form as a site content type and add it to a library; after triggering the REST connection I get an error. ULS gives me a 401 Access Denied for NT AUTHORITY\IUSR (as it should, since I don't have anonymous access enabled [nor has enabling it solved the issue]).
    That's my issue: all requests to the REST API are being executed as anonymous, and not as a user that should have permission.
    Things I've tried:
    1. The connection uses a UDCX file, and the connection is set to use the Form Server proxy. The proxy has been enabled for Forms Services, the web application, and the user connection. I've tried it with a configured App ID and with an explicit account.
    2. I've tried enabling anonymous access, but had no success.
    3. I've gotten the query to work on postbacks by adding the following to the web.config:
    <location path="_layouts/15/Postback.FormServer.aspx">
        <system.web>
          <identity impersonate="false" userName="bhs\sp_admin_dev" password="M1crosoft" />
        </system.web>
      </location>
    While this solves the issue for postback requests (and I could add FormServer.aspx to the list), I can't use this solution in a production environment, nor can I predict other issues the change could cause.
    I haven't been able to find any references to this error, so I wonder if I'm doing something wrong or if there's another way to do this.
    If I've been unclear on anything, let me know and I'll try to clear it up.

    Hi Choggo,
    Thank you for your information.
    Regarding this issue, it seems we may need to debug and trace your network traffic to check whether the parameter used for the REST connection is correct.
    I checked with the InfoPath team members regarding this issue; they suggest that you try impersonation, so that the logged-in user is not anonymous but the user you have already assigned.
    Since we have limited tools in this forum support, the last suggestion from our SharePoint team members is to check the UDCX file itself: verify that the permissions to access that file are correct. For example, if the file does not have read/access permission, the request may fall back to the anonymous account, which would prevent the data from being passed.
    If this suggestion is not applicable to your environment, our SharePoint team members suggest that you open an incident ticket so that we can check and confirm in more depth whether this is an undocumented behavior. The plan would be a remote session to trace the data-passing process and verify that IUSR no longer appears during authentication.
    http://support.microsoft.com/contactus/?wa=wsignin1.0
    Regards,
    Aries
    Microsoft Online Community Support

  • Paging data using rownum

    I am trying to fetch a range of data using rownum.
    1. This select statement works:
    SELECT * from (SELECT id, name, seq from test order by seq desc) where rownum>=1 and rownum<=15
    2. This select statement does not work:
    SELECT * from (SELECT id, name, seq from test order by seq desc) where rownum>=3 and rownum<=15
    3. This select statement returns the wrong rows:
    SELECT * from (SELECT id, name, seq, rownum as rn from test order by seq desc) where rn>=3 and rn<=15
    4. This works, but is two levels deep:
    SELECT * from (SELECT id, name, seq, rownum as rn from (SELECT id, name, seq from test order by seq desc)) where rn>=3 and rn<=15
    Is there a better way to fetch a range of data?
    Thanks
    YI

    Generally we don't filter query output on ROWNUM directly, because ROWNUM is assigned to each row as it is returned: a predicate like ROWNUM >= 3 can never be satisfied, since the first candidate row gets ROWNUM 1, fails the filter, is discarded, and the next candidate gets ROWNUM 1 again. (That is also why example 3 returns the wrong rows: there, ROWNUM is assigned before the ORDER BY is applied.) If you still need output based on row position, it should be something like this:
    SCOTT@orcl> select empno,rownum from emp;
         EMPNO     ROWNUM
            10          1
          7499          2
          7521          3
          7566          4
          7654          5
          7698          6
          7782          7
          7788          8
          7839          9
          7844         10
          7876         11
         EMPNO     ROWNUM
          7900         12
          7902         13
          7934         14
          9999         15
    15 rows selected.
    SCOTT@orcl> ed
    Wrote file afiedt.buf
      1  select * from ( select empno, row_number() over (order by empno) rorder from emp)
      2* where rorder between 2 and 4
    SCOTT@orcl> /
         EMPNO     RORDER
          7499          2
          7521          3
          7566          4
    SCOTT@orcl>
    Regards,
    Girish Sharma

  • How do you remove backup data from the memory storage? My storage data states that I have over 80 GB of data used for backups and I don't know why, as I use an external hard drive for Time Machine. Now my 250 GB flash storage is nearly full

    How do you remove backup data from the memory storage? My storage data states that I have over 80 GB of data used for backups and I don't know why, as I use an external hard drive for Time Machine. Now my 250 GB flash storage is nearly full. HELP!

    When Time Machine backs up a portable Mac, some of the free space will be used to make local snapshots, which are backup copies of recently deleted files. The space occupied by local snapshots is reported as available by the Finder, and should be considered as such. In the Storage display of System Information, local snapshots are shown as Backups. The snapshots are automatically deleted when they expire or when free space falls below a certain level. You ordinarily don't need to, and should not, delete local snapshots yourself. If you followed bad advice to disable local snapshots by running a shell command, you may have ended up with a lot of data in the Other category. Ask for instructions in that case.
    See this support article for some simple ways to free up storage space.

  • How to read tab-separated data from a text file using UTL_FILE

    Hi,
    How do I read tab-separated data from a text file using UTL_FILE?
    I know that with UTL_FILE.GET_LINE we can read a whole line, but I need to read each tab-separated value separately.
    Thanks in advance...
    Naveen

    Naveen Nishad wrote:
    How to read tab-separated data from a text file using UTL_FILE... I know if we use UTL_FILE.get_line we can read the whole line, but I need to read the tab-separated values separately.
    If it's a text file, then UTL_FILE will only allow you to read it a line at a time. It is then up to you to split that string into its individual components (search for "split string" on this forum for methods).
    If the text file contains a standard structure on each line, i.e. it is a fixed delimited structure, then you could use external tables to read the data instead.
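    For illustration of the "split the string" step, here is the idea as a C sketch (C is used for the other code on this page; in PL/SQL the same loop would use INSTR and SUBSTR, and the sample line below is invented):

    /* Split one tab-separated line into fields. strchr is used rather
       than strtok so that empty fields between consecutive tabs survive. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "10\tSMITH\t\t2500";    /* e.g. one line from GET_LINE */
        char *field = line;
        char *tab;
        int n = 0;

        while ((tab = strchr(field, '\t')) != NULL) {
            *tab = '\0';                      /* terminate the current field */
            printf("field %d: '%s'\n", n++, field);
            field = tab + 1;                  /* resume after the tab */
        }
        printf("field %d: '%s'\n", n, field); /* trailing field */
        return 0;
    }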

  • Problems reading my backed up data on hard drive after using Time Machine

    I have a MacBook Pro and had to take it in for servicing. I backed up all my data using Time Machine. They had to replace my battery, hard drive, and logic board, basically wiping my slate clean. When I got my computer back and plugged in my hard drive, Time Machine came up asking if I wanted to restore a backup. I chose the drive. Then that drive disappeared from the desktop, the disk name changed from "BACKUP" to "disk1s4", and it is now unreadable. I cannot get anything off the hard drive.
    Any idea how I can read my data off my hard drive? I have 3+ years of pictures of my 2 kids and all my music. The rest I could take or leave, but I would be devastated to lose my pictures.
    Please help
    Thank you

    Jlk51496 wrote:
    When I got my computer back and plugged in my hard drive, Time Machine came up asking if I wanted to restore a backup.
    Hi, and welcome to the forums.
    Do you mean this window?
    or this one?
    They are very different, obviously.
    Try repairing the drive, per #A5 in [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] (or use the link in *User Tips* at the top of this forum).
    Why was your Mac serviced? What did they replace?

  • How to read data using SQLGetData from a block, forward-only cursor (ODBC)

    Hi there. I am trying to read a small number of rows of data from either a Microsoft Access or a Microsoft SQL Server database (whichever is being used) as quickly as possible. I have connected to the database using the ODBC APIs and have run a select statement using a forward-only, read-only cursor. I can use either SQLFetch or SQLExtendedFetch (with a rowset size of 1) to retrieve each successive row, and then use SQLGetData to retrieve the data from each column into my local variables. This all works fine.
    My goal is to see if I can improve performance incrementally by using SQLExtendedFetch with a rowset size greater than 1 (a block cursor). However, I cannot figure out how to move to the first row of the returned rowset so that I can call SQLGetData to retrieve each column. If I were using a cursor type that was not forward-only, I would use SQLSetPos to do this. However, those other cursor types are slower, and the whole point of the exercise is to see how fast I can read this data. I can successfully read the data using a block forward-only cursor if I bind each column to an array in advance of the call to SQLExtendedFetch. However, that has several drawbacks and is documented to be slower for small numbers of rows. I really want to see what kind of speed I can achieve using a block, forward-only, read-only cursor with SQLGetData to get each column.
    Here is the test stub that I created:
    ' Create a SELECT statement to retrieve the entire collection.
    selectString = "SELECT [Year] FROM REAssessmentRolls"

    ' Create a result set using the existing read/write connection. The read/write connection is used rather than
    ' the read-only connection because it will reflect the most recent changes made to the database by this running
    ' instance of the application without having to call RefreshReadCache.
    If (clsODBCDatabase.HandleDbcError(SQLAllocStmt(gDatabase.ReadWriteDbc, selectStmt), gDatabase.ReadWriteDbc, errorBoxTitle) <> enumODBCSQLAPIResult.SQL_SUCCESS) Then
        GoTo LoadExit
    End If

    Call clsODBCDatabase.HandleStmtError(SQLSetStmtOption(selectStmt, SQL_CONCURRENCY, SQL_CONCUR_READ_ONLY), selectStmt, errorBoxTitle)
    Call clsODBCDatabase.HandleStmtError(SQLSetStmtOption(selectStmt, SQL_CURSOR_TYPE, SQL_CURSOR_FORWARD_ONLY), selectStmt, errorBoxTitle)
    Call clsODBCDatabase.HandleStmtError(SQLSetStmtOption(selectStmt, SQL_ROWSET_SIZE, MAX_ROWSET_SIZE), selectStmt, errorBoxTitle)

    If (clsODBCDatabase.HandleStmtError(SQLExecDirect(selectStmt, selectString, Len(selectString)), selectStmt, errorBoxTitle) <> enumODBCSQLAPIResult.SQL_SUCCESS) Then
        GoTo LoadExit
    End If

    ' Cursor through the result set. Each time we fetch data we get a SET of rows.
    sqlResult = clsODBCDatabase.HandleStmtError(SQLExtendedFetch(selectStmt, SQL_FETCH_NEXT, 0, rowsFetched, rowStatus(0)), selectStmt, errorBoxTitle)
    Do While (sqlResult = enumODBCSQLAPIResult.SQL_SUCCESS)
        ' Read all rows in the rowset
        For row = 1 To rowsFetched
            If rowStatus(row - 1) = SQL_ROW_SUCCESS Then
                sqlResult = clsODBCDatabase.HandleStmtError(SQLSetPos(selectStmt, row, SQL_POSITION, SQL_LOCK_NO_CHANGE), selectStmt, errorBoxTitle)
                Call clsODBCDatabase.SQLGetShortField(selectStmt, 1, assessmentRollYear(row - 1))
                Console.WriteLine(assessmentRollYear(row - 1).ToString)
            End If
        Next
        ' If the rowset we just retrieved contains the maximum number of rows allowed, there could be more data.
        If rowsFetched = MAX_ROWSET_SIZE Then ' there could be more data
            sqlResult = clsODBCDatabase.HandleStmtError(SQLExtendedFetch(selectStmt, SQL_FETCH_NEXT, 0, rowsFetched, rowStatus(0)), selectStmt, errorBoxTitle)
        Else
            Exit Do ' no more rowsets
        End If
    Loop ' Do While (sqlResult = enumODBCSQLAPIResult.SQL_SUCCESS)
    The test fails on the call to SQLSetPos. The error message I get is "Invalid cursor position; no keyset defined". I have tried passing SQL_POSITION and also SQL_REFRESH. Same error. There has to be a way to do this!
    Thank you for your help!
    Thank You! - Andy

    Hi Apelkey,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support
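
    For reference (this is not from the reply above): per the ODBC documentation, SQLGetData needs the cursor positioned on a single row, which on a rowset larger than 1 means SQLSetPos, and SQLSetPos in turn needs a keyset-driven or static cursor; that matches the "no keyset defined" error. With a forward-only block cursor the documented route is the column-wise array binding Andy already tried. A minimal C sketch of that approach, under stated assumptions (the table and column name come from the post, the [Year] column is assumed to fit a 16-bit integer, and error handling is elided):

    /* Block fetch with a forward-only, read-only cursor using
       column-wise array binding (SQLBindCol) instead of SQLGetData.
       hdbc is assumed to be an already-connected connection handle. */
    #include <sql.h>
    #include <sqlext.h>
    #include <stdio.h>

    #define ROWSET_SIZE 10

    void read_years(SQLHDBC hdbc)
    {
        SQLHSTMT stmt;
        SQLSMALLINT year[ROWSET_SIZE];   /* one array element per row */
        SQLLEN ind[ROWSET_SIZE];         /* indicator per row */
        SQLULEN rowsFetched = 0;
        SQLUSMALLINT rowStatus[ROWSET_SIZE];

        SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &stmt);
        SQLSetStmtAttr(stmt, SQL_ATTR_CONCURRENCY, (SQLPOINTER)SQL_CONCUR_READ_ONLY, 0);
        SQLSetStmtAttr(stmt, SQL_ATTR_CURSOR_TYPE, (SQLPOINTER)SQL_CURSOR_FORWARD_ONLY, 0);
        SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)ROWSET_SIZE, 0);
        SQLSetStmtAttr(stmt, SQL_ATTR_ROWS_FETCHED_PTR, &rowsFetched, 0);
        SQLSetStmtAttr(stmt, SQL_ATTR_ROW_STATUS_PTR, rowStatus, 0);

        /* Bind the column once; every SQLFetch then fills a whole rowset. */
        SQLBindCol(stmt, 1, SQL_C_SSHORT, year, 0, ind);
        SQLExecDirect(stmt, (SQLCHAR *)"SELECT [Year] FROM REAssessmentRolls", SQL_NTS);

        while (SQL_SUCCEEDED(SQLFetch(stmt))) {
            for (SQLULEN i = 0; i < rowsFetched; i++)
                if (rowStatus[i] == SQL_ROW_SUCCESS)
                    printf("%d\n", (int)year[i]);
        }
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    }

    The binding happens once, before the fetch loop, so the per-row cost inside the loop is just an array read; that is exactly the behavior a block cursor is meant to exploit.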

  • Use of Vendor and Customer in 'Define Shipping Data for Storage Location"

    Under the stock transfer order configuration, there is an IMG step <b>"Define Shipping Data for Storage Location"</b>.
    In this step, at the storage location level, we can assign a sales org/distribution channel, a division, and a VENDOR and CUSTOMER to the storage location.
    The customer number is used as the sold-to-party in outbound deliveries.
    <b>But I am not able to figure out the use of the VENDOR number we are specifying in this step.</b>
    If I am assigning the VENDOR number here to every storage location, then when I create a cross-company-code stock transport order, which vendor code do I select: the one assigned to the PLANT or the one assigned to the storage location?
    <b>Also, I need to know which vendor account group to use for creating this storage location vendor.</b> I have more than one storage location under one plant, so if I use account group "0007" (plant), SAP doesn't allow me to assign one plant code to 2 vendors, and in the vendor master the 'additional purchasing data' screen has no storage location field, only a 'PLANT' field.
    <b>So I need to know:
    1. Which account group to use for creating the storage location vendor
    2. What the ultimate use of this vendor code is
    3. Which vendor number to use while creating a cross-company PO (the one for the plant or the one for the storage location)</b>
    Thanks

    Hi,
    Define Shipping Data
    Here in this step you maintain the customer number of the receiving plant. This customer number is used in SD shipping processing to identify the goods recipient (ship-to party) if the stock transfer is to be carried out with an SD delivery.
    A goods receipt can be planned in the receiving plant.
    You can enter a vendor (transport vendor) in the stock transport order.
    In the IMG step "Define Shipping Data for Storage Location" you can assign a sales org/distribution channel, division, VENDOR and CUSTOMER to a storage location through the plant only. This means the vendor is created for entry in the STO: it is created for the supplying plant so that you can create the STO.
    Reward if useful
    Regards
    Sanjay L

  • Adobe form: able to post data using Adobe Reader 9 but not with Adobe Professional

    Hello Gurus,
    I am facing a problem with Adobe forms.
    We have developed an Adobe form using Adobe Reader 9.
    Now when users are posting a purchase requisition using the form, they are able to post the data using Adobe Reader 9 but not with Adobe Reader Professional.
    Can anyone please advise me what the problem could be here?

    Adobe Reader 9 can't save the old FDA forms. FDA must update their forms.

  • Is it possible to read digital data using an external clock (PCI-6259 M)?

    I'm using an NI PCI-6259 M Series card and trying to write my program in VC++ 6.0 using the functions in the DAQmx driver.
    Question 1: Not all functions listed in the NI-DAQmx C Reference Help seem to be supported by my NI card. Where can I find information about which of the functions are supported?
    Question 2: I want to read data from a device that clocks out data on the falling edge of a clock signal. The clock signal and the data signal are routed to two DIO terminals on the NI card. The question is whether it is possible to read the data using this clock as a sample clock. See the two code examples below that don't work; in both cases 10 samples are read at once, even if the external clock is not present.
    Example 1
    // Create tasks
    Status = DAQmxCreateTask("", &m_ReadTrimTask);
    // Set up read task
    status = DAQmxCreateDIChan(m_ReadTrimTask, "Dev1/port2/line0", "", DAQmx_Val_ChanPerLine);
    status = DAQmxCfgChangeDetectionTiming(m_ReadTrimTask,"Dev1/port2/line6","Dev1/port2/line6",DAQmx_Val_FiniteSamps, 10);
    // Read data
    int32 sampsPerChanRead, numBytesPerSamp;
    status = DAQmxReadDigitalLines(m_ReadTrimTask, 10, 10.0, DAQmx_Val_GroupByChannel, result, 10, &sampsPerChanRead, &numBytesPerSamp ,NULL);
    Example 2
    // Create tasks
    Status = DAQmxCreateTask("", &m_ReadTrimTask);
    // Set up read task
    status = DAQmxCreateDIChan(m_ReadTrimTask, "Dev1/port2/line0", "", DAQmx_Val_ChanPerLine);
    status = DAQmxSetSampTimingType(m_ReadTrimTask, DAQmx_Val_SampClk);
    status = DAQmxSetSampClkRate(m_ReadTrimTask, 1000.0);
    status = DAQmxSetSampClkActiveEdge(m_ReadTrimTask, DAQmx_Val_Falling);
    status = DAQmxSetSampClkSrc(m_ReadTrimTask, " Dev1/port2/line6");
    // Read data
    int32 sampsPerChanRead, numBytesPerSamp;
    status = DAQmxReadDigitalLines(m_ReadTrimTask, 10, 10.0, DAQmx_Val_GroupByChannel, result, 10, &sampsPerChanRead, &numBytesPerSamp ,NULL);

    Hello Magnus,
    Thank you for contacting National Instruments.
    "Question1: Not all functions listed in the NI-DAQmx C Reference Help seems to be supported by my NI-card, where can I find information about which of the functions that are supported?"
    The best place to look for this information would be the M Series Help Manual. There you can find the features of your PCI-6259 and what operations it supports.
    "Question2: I want to read data from a device that clock out data on the falling edge of a clock signal. The clock signal and the data signal are routed to two DIO terminals on the NI-card. The question is if it is possible to read data using the clock as a sample clock? See two code examples below that doesn’t work. In both cases 10 samples are read at once, even if the external clock is not present."
    Look at the "ContReadDigChan-ExtClk_Fn.c" example project which ships with the NI-DAQ driver. This is located at: C:\Program Files\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Digital\Read Values\Cont Read Dig Chan-Ext Clk.
    You will have to make some minor modifications to convert this to a finite acquisition, but that is simply a matter of changing the "sampleMode" parameter of the DAQmxCfgSampClkTiming() function. You will also have to route your clock signal to a PFI line and specify which line in your code.
    I hope this helps.
    Sean C.
    Applications Engineering
    National Instruments
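
    A rough sketch of the suggested changes in C (a sketch only, not NI's shipping example verbatim: it assumes the external clock has been routed to PFI0 and moves the data line to port0, since hardware-timed digital input on M Series boards is supported on port 0; error handling elided):

    /* Finite digital read clocked by an external signal on PFI0.
       Based on the ContReadDigChan-ExtClk idea, switched to finite
       sampling via the sampleMode parameter of DAQmxCfgSampClkTiming. */
    #include <NIDAQmx.h>
    #include <stdio.h>

    int main(void)
    {
        TaskHandle task = 0;
        uInt8 data[10];
        int32 sampsRead = 0, bytesPerSamp = 0;

        DAQmxCreateTask("", &task);
        /* port0 supports clocked DI on M Series; port2 is static-only. */
        DAQmxCreateDIChan(task, "Dev1/port0/line0", "", DAQmx_Val_ChanPerLine);
        /* Sample on the falling edge of the external clock on PFI0. With
           an external clock the rate is only a hint for buffer sizing. */
        DAQmxCfgSampClkTiming(task, "/Dev1/PFI0", 1000.0, DAQmx_Val_Falling,
                              DAQmx_Val_FiniteSamps, 10);
        DAQmxStartTask(task);
        /* Blocks until 10 external clock edges arrive or the 10 s timeout. */
        DAQmxReadDigitalLines(task, 10, 10.0, DAQmx_Val_GroupByChannel,
                              data, (uInt32)sizeof(data), &sampsRead,
                              &bytesPerSamp, NULL);
        printf("Read %ld samples\n", (long)sampsRead);
        DAQmxClearTask(task);
        return 0;
    }

    Unlike the examples in the question, no samples are returned until the external clock actually toggles, because the read now waits on the configured sample clock instead of performing a software-timed read.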

  • How to read a file with data in Hierarchical Structure using XSD Schema

    Hi
    We have a requirement in which we have to read a FIXED LENGTH file with the FILE ADAPTER. The file contains data in a hierarchical structure. The hierarchy in the file is identified by the first 3 characters of every line, which can be any of 000, 001, 002, 003 and 004. The remaining fields follow these. So the structure is:
    000 -- Header of file. Will come only once in the file. Length of this line is 43 characters.
    -- 001 -- Sub-header. Child of 000. Can repeat in the file. Length of this line is 51 characters.
    --- 002 -- Detail record. Child of 001. Can repeat multiple times in a given 001. Length of this line is 43 characters.
    -- 003 -- Sub-footer record at the same level as 001. Will always come once with a 001 record. Child of 000. Length of this line is 48 characters.
    004 -- Footer of file. At the same level as 000. Will come only once in the file. Length of this line is 48 characters.
    The requirement is to create an XSD which also validates this hierarchical structure, i.e. the data must come in this hierarchy only, otherwise an error should be raised while parsing the file.
    While configuring the FILE ADAPTER to read this file, we are using the Native Schema UI to create the XSD to parse this structure, using an example data file. But we have not been able to create a valid XSD that also validates the hierarchy of the file.
    Please provide any pointers or a solution for this.
    Link to download the file, file structure details and XSD that we have created:
    https://docs.google.com/file/d/0B9mCtbxc3m-oUmZuSWRlUTBIcUE/edit?usp=sharing
    Thanks
    Amit Rattan

    Hello. Can anyone help me with this? I need to do the hierarchical read/validation while reading the file with the File Adapter using a native XSD schema.

  • Writing the file using Write to SGL and reading the data using Read from SGL

    Hello Sir, I have a problem using the Write to SGL VI. When I try to write data captured with a DAQ board to an SGL file, I am unable to store the data as desired. There might be some problem with the VI I am using to write the data to the SGL file, but I am not able to figure it out. I am attaching a zip file which contains five files.
    1) Acquire_Current_Binary_Exp.vi -> This is the VI I used to store my data using Write to SGL.
    2) Retrive_BINARY_Data.vi -> This is the VI I used to read from the SGL file and plot the data.
    3) Binary_Capture -> This is the data captured using (1), which can be plotted using (2); what I observed is that the plot is different and the time scale is not as expected.
    4) Unexpected_Graph.png -> This is the unexpected graph I get when using Write to SGL and Read from SGL to store and retrieve the data.
    5) Expected_Graph.png -> This is the data format I expected to get. I obtained this plot when I used Write to LVM and Read from LVM to store and retrieve the data.
    I tried modifying the sub-VIs a lot, but it doesn't work for me. I think I am making some mistake while writing the data to SGL and reading it back. Also, I don't know why my graph is not like (5); instead I am getting something like (4). It is totally different. You can also observe the difference between the time scales of (4) and (5).
    Attachments:
    Krishna_Files.zip 552 KB

    The binary data file has no time axis information, it is pure y data. Only the LVM file contains information about t(0) and dt. Since you throw away this information before saving to the binary file, it cannot be retrieved.
    Did you try wiring a 2 as suggested?
    (see also http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=925 )
    Message Edited by altenbach on 07-29-2005 11:35 PM
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    Retrive_BINARY_DataMOD2.vi 1982 KB
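
    To see altenbach's point concretely, here is a C sketch of the general idea (not the LabVIEW fix itself; the file name and values are invented): a raw binary dump of SGL values stores only y-data, so t0/dt must be written explicitly or they are gone.

    /* A bare fwrite of float samples keeps no timing metadata; store dt
       (and t0 if needed) as an explicit header the reader knows about. */
    #include <stdio.h>

    int main(void)
    {
        float dt = 0.001f;                   /* sample interval to preserve */
        float y[4] = {0.0f, 0.5f, 1.0f, 0.5f};
        float dt2, y2[4];

        FILE *f = fopen("data.bin", "wb");
        fwrite(&dt, sizeof dt, 1, f);        /* header: dt first ... */
        fwrite(y, sizeof(float), 4, f);      /* ... then the samples */
        fclose(f);

        f = fopen("data.bin", "rb");
        fread(&dt2, sizeof dt2, 1, f);       /* reader must know the layout */
        fread(y2, sizeof(float), 4, f);
        fclose(f);
        printf("dt=%g, first sample=%g\n", dt2, y2[0]);
        return 0;
    }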

  • Using a range of dates for Key Date

    In an HR BI data warehouse, we have a position-to-position hierarchy, where each of the nodes is time-dependent. So it shows, for each node, the valid-from and valid-to dates, and all the employees who report to that position. This hierarchy is built on the InfoObject 0HRPOSITION, which is maintained in R/3 and extracted to BI.
    Let us take an example:
    Position 1000 is valid from 1-1-2006 to 6-30-2006; employees reporting to this position are A, B, C, D.
    Position 1000 is valid from 7-1-2006 to 12-31-9999; employees reporting to this position are A, E, F, G.
    When a user chooses position 1000 and the date range 1-1-2006 to 12-31-2006, it should show the complete list of employees: A, B, C, D, E, F, G.
    But the key date can only be a single value, and it automatically takes today's date and pulls the nodes based on that.
    I have created a hierarchy node variable on the 0HRPOSITION InfoObject and entered the value 1000, with no value for the key date.
    The system simply shows employees A, E, F and G. That is my problem.
    My requirement is this: I would like to be able to give a date range (for the hierarchy), say from 1-1-2006 to 12-31-2006, and get the complete list of employees, which is A, B, C, D, E, F, G.
    Is this possible? Can I change the way this hierarchy is defined so that I can pull the possible values for a range?

    Thank you Ajay.
    After some thinking, I have realized that these options will not work.
    We have a position-to-position hierarchy that shows who reports to whom in the organization. This hierarchy is built on the InfoObject 0HRPOSITION. Each node in this hierarchy is time-dependent. Note that the entire hierarchy is not time-dependent; only the individual position nodes are.
    This 0HRPOSITION InfoObject exists in the Headcount cube as one of the characteristics. Here is my requirement:
    1. I want to show in a report all the employees (directly or indirectly) reporting to a manager for a period of, say, 1 year.
    I know that I can specify a key date for the 0HRPOSITION hierarchy, and the report will then show all the employees (direct and indirect) reporting to a position as of, say, 6/30/2008. I don't want this for a specific date; I want to get ALL the employees (direct and indirect) reporting to a position over a range of dates (say 1 year).
    Does that make sense? How do we achieve this goal?

  • What component is included with Microsoft SQL Servers that can be used to perform a broad range of data migration tasks?

    What component is included with Microsoft SQL Servers that can be used to perform a broad range of data migration tasks?
    a. Full Text Search service
    b. SQL Notification Server
    c. SQL Reporting Server
    d. SQL Server Integration Services

    d.
    Are you having a test and trying to cheat?
    For every expert, there is an equal and opposite expert. - Becker's Law
