SCAN_Op returns one less sample than expected.

I'm acquiring data from an SCXI chassis using a WinXP P4 computer. Sometimes when I run my app, SCAN_Op returns one less sample than it should. Looking at the data array, it appears that one sample is missing down in the array where there should be a reading from the first channel. The rest of the readings are all shifted up by one; the first of these is garbage, and the rest are correct. This only happens some of the time, but when it does, it happens no matter how many modules I'm using in the chassis (I've tried up to three at once). It also happens using just one high-voltage module. As a result, the data I send to SCAN_Demux is incorrect and comes out scrambled. I've checked my inputs to the function repeatedly and am confident they are correct. I am in debug mode when this happens; so far, I haven't seen it happen in release mode. Has anyone seen this condition?

Hello,
I am not sure exactly what could be causing this problem. However, we could try something.
Try allocating your memory as an i32 array that is half the size of the array that you want. This will force memory to be allocated slightly differently. Then just typecast your array whenever you need to use it.
I know this has fixed some alignment issues in the past.
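For example, a minimal C sketch of that idea (the variable names and sample count are illustrative, an even sample count is assumed, SCAN_Op's full argument list is omitted, and i16/i32/u32 are the NI-DAQ typedefs):
    #include <stdlib.h>
    u32 count = 10000;                                    /* total i16 samples wanted */
    i32 *alignedBuf = malloc((count / 2) * sizeof(i32));  /* same byte size, allocated as i32 */
    i16 *buffer = (i16 *)alignedBuf;                      /* typecast view passed to SCAN_Op */
    /* ... SCAN_Op(..., buffer, count, ...); then pass the same buffer to SCAN_Demux ... */
    free(alignedBuf);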
Let us know if this does not work.
Best regards,
Justin T.
National Instruments

Similar Messages

  • Java matcher - one less result than expected

    Hi all,
    I have a question about the Java matcher, which may have a very simple answer, but after a while trying to find the answer (through debugging and googling), I have come up with no solution.
    Basically, I am trying to parse through CSV files and find text matching a user-defined String that is entered. I have written a small CSV file and have written many instances of one word (for instance "java"). However when I use the matcher.find() method upon the content of the CSV file, it always finds one less match than are actually in the file! (e.g. if there are 6 instances of the word 'java', the matcher finds 5). I have printed the contents of the CSV file to the screen and the whole file is correctly displayed, so it must be something to do with the way the matcher works. Here is my code, although it is only basic at the moment:
        Pattern pattern = Pattern.compile("java", Pattern.CASE_INSENSITIVE);
        Matcher matcher = pattern.matcher(pageContents); // this is a String containing the contents of the CSV file.
        ArrayList<String> allMatches = new ArrayList<String>();
        if (!matcher.find()) {
            System.err.println("\nNo Matching Data Could Be Found In This Text File.\n");
            System.exit(0);
        }
        while (matcher.find()) {
            String individualMatch = matcher.group().trim();
            System.out.println(individualMatch);
            allMatches.add(individualMatch);
        }
        return allMatches;
    So basically I am returning an ArrayList of Strings which contains all matches of the word "java" (just as an example).
    Any help would be greatly appreciated!
    Thanks,
    Jon

    if (!matcher.find()) {  // This line matches the first one, and throws it away
        System.err.println("\nNo Matching Data Could Be Found In This Text File.\n");
        System.exit(0);
    }
    while (matcher.find()) {  // .. since you call find here again.
        String individualMatch = matcher.group().trim();
        System.out.println(individualMatch);
        allMatches.add(individualMatch);
    }
    return allMatches;
    Kaj
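    A minimal corrected sketch (one way to keep that first match, using the original variable names):
        if (!matcher.find()) {
            System.err.println("\nNo Matching Data Could Be Found In This Text File.\n");
            System.exit(0);
        }
        do {
            // matcher is already positioned on a match here, so record it
            // before calling find() again for the next one.
            String individualMatch = matcher.group().trim();
            System.out.println(individualMatch);
            allMatches.add(individualMatch);
        } while (matcher.find());
        return allMatches;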

  • BAPI_CUSTOMER_GETLIST returning one less result when multiples of 100...

    Good day, All
    We call BAPI_CUSTOMER_GETLIST to return address information (ADDRESSDATA) for a group of customer records (IDRANGE).  Problem is that when the number of customer records is 201, 301, 401, 501, ... the address records that are returned are missing one.  We get 200, 300, 400, 500, ... records returned.
    I'm just wondering if anybody else has seen this and what you may have done to avoid this.
    Thanks,
    Chad

    Thanks for the reply Jonathan.
    What you are saying makes perfect sense except when we remove one record from the IDRANGE list going from 201 to 200 then we get the expected 200 record results in ADDRESSDATA and SPECIALDATA.  When we run that same removed record all by itself it does in fact return a single result line in both ADDRESSDATA and SPECIALDATA.
    I did find a Support Note 882460 that mentions something similar. Could this be the solution that we need? Our version of R/3 is 4.7.

  • Number of address book contacts on MacBook Air using iCloud is one less than my other iDevices using iCloud.  Is this a known bug?

    Number of address book contacts on MacBook Air using iCloud is one less than my other iDevices using iCloud.  Is this a known bug?  I've checked for duplicates on my iDevices and I can't find any.  Additionally, I have wiped all the data from the Application Support folder of the address book and re-connected to iCloud, but it always shows 308 contacts instead of 309 found on iCloud and the rest of my devices.  Any ideas?


  • Error in sql query as "loop has run more times than expected (Loop Counter went negative)"

    Hello,
    When I run the query as below
    DECLARE @LoopCount int
    SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
    WHILE (
        SELECT Count(*)
        FROM KC_PaymentTransactionIDConversion with (nolock)
        Where KC_Transaction_ID is NULL
        and TransactionYear is NOT NULL
    ) > 0
    BEGIN
        IF @LoopCount < 0
            RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
                   16, -- Severity.
                   1) -- State.
    SET @LoopCount = @LoopCount - 1
    end
    I am getting the error "loop has run more times than expected (Loop Counter went negative)".
    Could anyone help with this issue ASAP?
    Thanks ,
    Vinay

    Hi Vinay,
    According to your code above, the error message makes sense: as long as the value returned by "SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL" is bigger than 0, the loop keeps running and decreasing @LoopCount. Since nothing in the loop changes the table data, the returned value stays bigger than 0, so the loop keeps decreasing @LoopCount until it goes negative and the error is raised.
    To fix this issue with the current information, we should make the following modification:
    Change the code
    WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
    ) > 0
    To
    WHILE @LoopCount > 0
    Besides, since the current loop body does nothing useful, please modify the query based on your actual requirement.
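    Putting it together, a minimal sketch (the loop body below is a placeholder; the real per-iteration work still has to be written):
        DECLARE @LoopCount int
        SET @LoopCount = (SELECT COUNT(*)
                          FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                          WHERE KC_Transaction_ID IS NULL
                            AND TransactionYear IS NOT NULL)
        WHILE @LoopCount > 0
        BEGIN
            -- placeholder: do one unit of real work here
            SET @LoopCount = @LoopCount - 1
        END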
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Inconsistent Accessibility: parameter type 'CRUDApplication.Models.IEmployeeRepository' is less accessible than method 'CRUDApplication.Controllers.EmployeeController.EmployeeController'

    I am getting this error in my code:
    Inconsistent accessibility: parameter type 'CRUDApplication.Models.IEmployeeRepository' is less accessible than method 'CRUDApplication.Controllers.EmployeeController.EmployeeController(CRUDApplication.Models.IEmployeeRepository)'   
    Here's my code
    // EmployeeController.cs
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using CRUDApplication.Models;
    using System.Data;

    namespace CRUDApplication.Controllers
    {
        public class EmployeeController : Controller
        {
            // GET: /Employee/
            private IEmployeeRepository _repository;

            public EmployeeController()
                : this(new EmployeeRepository())
            {
            }

            public EmployeeController(IEmployeeRepository repository)
            {
                _repository = repository;
            }

            public ActionResult Index()
            {
                var employee = _repository.GetEmployee();
                return View(employee);
            }

            public ActionResult Details(int id)
            {
                EmployeeModel model = _repository.GetEmployeeByID(id);
                return View(model);
            }

            public ActionResult Create()
            {
                return View(new EmployeeModel());
            }

            [HttpPost]
            public ActionResult Create(EmployeeModel employee)
            {
                try
                {
                    if (ModelState.IsValid)
                    {
                        _repository.InsertEmployee(employee);
                        return RedirectToAction("Index");
                    }
                }
                catch (DataException)
                {
                    ModelState.AddModelError("", "Can't be Saved!");
                }
                return View(employee);
            }

            public ActionResult Edit(int id)
            {
                EmployeeModel model = _repository.GetEmployeeByID(id);
                return View(model);
            }

            [HttpPost]
            public ActionResult Edit(EmployeeModel employee)
            {
                try
                {
                    if (ModelState.IsValid)
                    {
                        _repository.UpdateEmployee(employee);
                        return RedirectToAction("Index");
                    }
                }
                catch (DataException)
                {
                    ModelState.AddModelError("", "Can't be Saved!");
                }
                return View(employee);
            }

            public ActionResult Delete(int id, bool? saveChangesError)
            {
                if (saveChangesError.GetValueOrDefault())
                {
                    ViewBag.ErrorMessage = "Can't be Deleted!";
                }
                EmployeeModel employee = _repository.GetEmployeeByID(id);
                return View(employee);
            }

            [HttpPost, ActionName("Delete")]
            public ActionResult DeleteConfirmed(int id)
            {
                try
                {
                    EmployeeModel user = _repository.GetEmployeeByID(id);
                    _repository.DeleteEmployee(id);
                }
                catch (DataException)
                {
                    return RedirectToAction("Delete",
                        new System.Web.Routing.RouteValueDictionary {
                            { "id", id },
                            { "saveChangesError", true } });
                }
                return RedirectToAction("Index");
            }
        }
    }

    // IEmployeeRepository.cs
    using System.Collections.Generic;

    namespace CRUDApplication.Models
    {
        // Note: no access modifier here, so the interface defaults to internal.
        interface IEmployeeRepository
        {
            IEnumerable<EmployeeModel> GetEmployee();
            EmployeeModel GetEmployeeByID(int Emp_ID);
            void InsertEmployee(EmployeeModel emp_Model);
            void DeleteEmployee(int Emp_ID);
            void UpdateEmployee(EmployeeModel emp_Model);
        }
    }

    // EmployeeRepository.cs
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;

    namespace CRUDApplication.Models
    {
        public class EmployeeRepository : IEmployeeRepository
        {
            private EmployeeDataContext emp_DataContext;

            public EmployeeRepository()
            {
                emp_DataContext = new EmployeeDataContext();
            }

            public IEnumerable<EmployeeModel> GetEmployee()
            {
                IList<EmployeeModel> employeeList = new List<EmployeeModel>();
                var myQuery = from q in emp_DataContext.EmployeeTabs
                              select q;
                var emp = myQuery.ToList();
                foreach (var empData in emp)
                {
                    employeeList.Add(new EmployeeModel()
                    {
                        ID = empData.ID,
                        Emp_ID = empData.Emp_ID,
                        Name = empData.Name,
                        Dept = empData.Dept,
                        City = empData.City,
                        State = empData.State,
                        Country = empData.Country,
                        Mobile = empData.Mobile
                    });
                }
                return employeeList;
            }

            // GetEmployeeByID is declared in the interface but was omitted from the posted code.

            public void InsertEmployee(EmployeeModel emp_Model)
            {
                var empData = new EmployeeTab()
                {
                    Emp_ID = emp_Model.Emp_ID,
                    Name = emp_Model.Name,
                    Dept = emp_Model.Dept,
                    City = emp_Model.City,
                    State = emp_Model.State,
                    Country = emp_Model.Country,
                    Mobile = emp_Model.Mobile
                };
                emp_DataContext.EmployeeTabs.InsertOnSubmit(empData);
                emp_DataContext.SubmitChanges();
            }

            public void DeleteEmployee(int Emp_ID)
            {
                EmployeeTab employee = emp_DataContext.EmployeeTabs.Where(u => u.ID == Emp_ID).SingleOrDefault();
                emp_DataContext.EmployeeTabs.DeleteOnSubmit(employee);
                emp_DataContext.SubmitChanges();
            }

            public void UpdateEmployee(EmployeeModel emp_Model)
            {
                EmployeeTab EmpData = emp_DataContext.EmployeeTabs.Where(u => u.ID == emp_Model.ID).SingleOrDefault();
                EmpData.Name = emp_Model.Name;
                EmpData.Dept = emp_Model.Dept;
                EmpData.City = emp_Model.City;
                EmpData.State = emp_Model.State;
                EmpData.Country = emp_Model.Country;
                EmpData.Mobile = emp_Model.Mobile;
                emp_DataContext.SubmitChanges();
            }
        }
    }
    You have a ctor on EmployeeController that is public and therefore callable by anyone. However, it accepts an IEmployeeRepository, which is not a public type, so it will not compile. You can fix this in one of several ways:
    Make IEmployeeRepository public.
    Alternatively, since IEmployeeRepository is most likely marked as internal, mark the EmployeeController ctor as internal as well. Chances are the interface was made internal for unit testing, so if you mark the ctor internal your unit test project won't find it anymore. To work around that, add an InternalsVisibleTo attribute to your repository assembly, with the name of your unit test project as the parameter. This allows the unit test project to find the internal ctor.
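    For illustration, minimal sketches of both options (the test-project name below is a placeholder):
        // Option 1: make the interface public so the public ctor compiles.
        public interface IEmployeeRepository
        {
            // ... members unchanged ...
        }

        // Option 2: keep the interface internal and make the ctor internal too,
        // then expose internals to the unit-test assembly (placeholder name).
        // In AssemblyInfo.cs of the repository assembly:
        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("CRUDApplication.Tests")]

        internal EmployeeController(IEmployeeRepository repository)
        {
            _repository = repository;
        }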
    Michael Taylor
    http://blogs.msmvps.com/p3net

  • Sequence creating larger gaps than expected

    Oracle 11g, windows server2008
    Hi
    How can I find out which table columns used a sequence in my database (in the past 2 or 3 days, for example)?
    This is needed because the sequence increments by 1, BUT a query on the table column which uses this sequence showed that large gaps are being produced between the numbers in that column.
    The cache is 20.
    We don't need consecutive numbers; we're just curious why the sequence is producing bigger gaps than expected.
    What are the reasons for the sequence cache being cleared, other than shutdown, rollback, and export/import of the database?
    The gaps are not uniform: sometimes there is a gap of 20, sometimes 17 or 100, i.e. uneven numbers with no pattern. While searching the code, packages, etc. to see where the sequence is being used, only this one table column turned up, but there could be some other column using it; we would like to know that, since it could be the reason why the sequence is showing large gaps like this.
    (E.g. to produce 100 records, it used 1000 numbers, etc.)
    Thanks very much

    Krithi wrote:
    Hi
    Can you please expand on it a bit more, as it's not familiar to me?
    1)After enabling auditing using the syntax(logged in as sysdba i suppose),where will I see the result and how to check the result?
    DBA_AUDIT_TRAIL
    2)How to disable auditing once I have done with this?
    You have the syntax for turning on the audit, the same reference manual that documents that also documents how to turn it off
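    For example, a sketch (assuming the database audit trail is enabled, e.g. AUDIT_TRAIL=DB; the schema and sequence names are placeholders):
        AUDIT SELECT ON your_schema.your_seq BY ACCESS;   -- start recording NEXTVAL usage
        -- ... let the application run for a while ...
        SELECT username, obj_name, action_name, timestamp
          FROM dba_audit_trail
         WHERE obj_name = 'YOUR_SEQ';
        NOAUDIT SELECT ON your_schema.your_seq;           -- turn auditing off again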
    Here's a bit of fatherly advice to help you in your career.  When you are given a clue like a particular sql statement, or mention of a feature like 'audit', spend a bit of time in the documentation to see what you can learn of it - before coming back and asking more about it.
    =================================================
    Learning how to look things up in the documentation is time well spent investing in your career.  To that end, you should drop everything else you are doing and do the following:
    Go to  docs.oracle.com.
    Locate the link for your Oracle product and version, and click on it.
    You are now at the entire documentation set for your selected Oracle product and version.
    BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab (for 10.x) or the "Master Book List" link (for 11.x) you will find the complete documentation library.
    Spend a few minutes just getting familiar with what kind of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
    You don't have to read the above in depth.  They are reference manuals.  Just get familiar with what is there to be referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - Read a chapter a day from the Concepts Manual.
    - Take a look in your alert log.  One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - When you have finished reading the Concepts Manual, do it again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
    =================================

  • Why offer less functionality than iPhoto?

    It seems like this new app offers less functionality than iPhoto, save for its photo-manipulation tools. I can't edit location data, half of the toolbar options are greyed out and inaccessible, I can't adjust the sort order, and there are no events, or at least no ways to adjust where one moment begins and another ends. More and more, Photos is revealing itself to be still quite in the beta stage and released prematurely, which is odd for Apple, which I feel is normally highly concerned with the details. When I view imported photos in moments on my iPhone, the time stamps are incorrect, and it appears that photos are in the order of their last-edited time rather than when they were shot, which completely contrasts with the whole idea of moments. The hidden photos album and the way it's "hidden" is nice, but I feel the lack of a standard "trash" or "deleted" area is cumbersome. Half the time when editing a photo the changes don't take effect, and there's no way to access the original through the Finder. I simply don't understand why this unnecessary, albeit beautiful, app was rushed out the door.

    Have you tried returning to iPhoto on your computer? If the application is still in the OS X Applications folder, you may be able to keep using it. How you upgraded to Yosemite will have some bearing on the fate of iPhoto: the App Store does not have it, and the last iPhoto version would have had to be updated in Mavericks or pre-OS X 10.10.
    A full system clone of the previous OS X and its applications suite allows one to run it from another partition or a different drive. Sadly, these ideas arrive too late, or without sufficient information to initiate the process early and save this option before it evaporates.
    In my opinion, reliance on the cloud is akin to vaporware: if you cannot retain an offline archive of your images in several reliable repositories where they are kept available to the owner, without paying a storage fee to an unknown entity at some untold distance, then some of your ownership rights are lost, and you have to pay to use them, too.
    Before upgrading to a later version of an operating system, it is always wise to have a backup that can actually run the computer in the older operating system, with all its applications. Though upgrades can be a good thing, some don't realize an upgrade goes beyond an update. To see your created content divorced from your control is not a great way to find this out.
    So this is a cautionary tale: back up well beyond the basic Time Machine, since you may not be able to restore a fully working system if you haven't an online recovery method to get the main segments, unless you saved them ahead of time in an archive you control.
    PS: Basic photo-editing software such as GraphicConverter, ToyViewer, and others can be helpful to modify and process images, and can do more than they appear to. Inexpensively.
    In any event...
    Good luck & happy computing!

  • Nothing simpler, smaller, less expensive than Apple's DVI to ADC adapter?

    My 2001 vintage 466 MHz PowerMac G4 with 15" Studio Display has served me well over the years. But, alas, she's running Panther (10.3.9) and lacks the requirements to move on to Leopard, which I'm ready to do. So, I decided to upgrade the machine. Since the other parts of the system (monitor, keyboard, mouse & printer) were just fine, I decided that a new Mac Mini would fit the bill nicely. (I'm also a Scotsman, always looking for ways to save money. Perhaps that's part of the problem.)
    I received it yesterday and wasn't expecting the ADC-monitor to DVI-computer dilemma. (Which shows you just how out of touch I am with technology's incessant march forward.) So, I investigated solutions.
    Imagine my surprise when I discover that the only solution out there is by Apple, in the form of a brick almost as big as my Mini! And, another $100 to boot! I did find a third party solution through a link on Apple's website, but that's $150! (Thanks for the referral, Apple!) My mind is boggled - there must be other available solutions but, if there are, they are evading me.
    Does anyone know of any solution that is simpler, smaller and less expensive than Apple's DVI to ADC adapter?
    Thanks for any and all info.

    The Apple adapter is the only solution if you really want to keep that display. It might be a good time to look for a larger display, with DVI.
    I am using the Apple adapter with my seven year old 22" display, but I wouldn't want to run OS X on anything smaller.

  • Report Queries - Multiple Source queries - One source returns & one doesn't

    Hi Folks.
    Odd one here.
    I have a report query entry in the Shared Components of the APEX UI.
    Within the Query are two source queries.
    Both return unique column names.
    Both use the same bind variable.
    When generating a PDF using the call to the report, the data from one query is being returned and the data from another is not.
    I have generated a sample XML file from APEX which is populated with data for both queries.
    If I import this into MS-Word using the BIP plugin and generate a preview all is fine. All fields using data from both source queries are populated.
    When generated via a regular call within APEX it simply does not work. I only get the data from one query.
    Does anyone have any suggestions?
    Anyone had a similar thing?
    Any comments/suggestions welcome.
    Many thanks
    Dogfighter.

    Hi Marc
    Does this make sense to you?
    I have now set up a simple report query and report layout.
    The record ID is hard coded into the query so that it only returns one row.
    I am using an RTF based template.
    If I navigate to Shared Components > Report Queries > Edit Report Query and then press the 'Test Report' button it runs perfectly.
    If I copy the 'Print URL:' value from this page and use it as the URL target on any page in the app. it also works perfectly.
    If I try and execute the following...
    update invoice_summary
    set invoice_pdf = APEX_UTIL.GET_PRINT_DOCUMENT (127, -- App ID
    'TEST_INVOICE', -- query name
    'TEST_INVOICE') -- layout name
    where invoice_summary_gsm_id = a.INVOICE_ID;
    I get the pdf saved to the BLOB column with the field labels in place but no data.
    Three ways of running the report.
    Two of them work perfectly but the one I want (generate direct into BLOB column) does not. Why would the exact same report query & report layout work with the other two methods but not the GET_PRINT_DOCUMENT route?
    It's not as if the pdf does not get into the BLOB column, it does, but it only has the labels and no values.
    Any ideas?
    Simon.
    PS. I only have one report query and one report layout set up so as to avoid confusion. Both with the same name.

  • How to return just one row of a one-to-many join

    So I have a one-to-many join where the SMOPERATOR table has data I need; however, it has a couple of rows that match the JOIN condition. I just need to return one row. I think this can be accomplished with a subquery in the join, but I have not been able to come up with the right syntax to do so.
    So:
    SELECT "NUMBER" as danumber,
    NAME,
    SMINCREQ.ASSIGNMENT,
    SMOPERATOR.PRIMARY_ASSIGNMENT_GROUP,
    SMOPERATOR.WDMANAGERNAME,
    SMINCREQ.owner_manager_name,
    SMINCREQ.subcategory, TO_DATE('01-'||TO_CHAR(open_time,'MM-YYYY'),'DD-MM-YYYY')MONTHSORT,
    (CASE WHEN bc_request='f' THEN 'IAIO'
    WHEN (bc_request='t' and substr(assignment,1,3)<>'MTS') THEN 'RARO'
    WHEN (bc_request='t' and substr(assignment,1,3)='MTS') THEN 'M'
    ELSE 'U' end) as type
    from SMINCREQ
    left outer join SMOPERATOR on SMINCREQ.assignment=SMOPERATOR.primary_assignment_group
    WHERE SMINCREQ.owner_manager_name=:P170_SELECTION and SMOPERATOR.wdmanagername=:P170_SELECTION
    AND open_time BETWEEN to_date(:P170_SDATEB,'DD-MON-YYYY') AND to_date(:P170_EDATEB,'DD-MON-YYYY')
    AND
    (bc_request='f' and subcategory='ACTIVATION' and related_record<>'t')
    OR
    (bc_request='f' and subcategory<>'ACTIVATION')
    OR
    (bc_request='t' and substr(assignment,1,3)<>'MTS')
    order by OPEN_TIMe

    Hi,
    This sounds like a Top-N Query, where you pick N items (N=1 in this case) off the top of an ordered list. I think you want a separate ordered list for each assignment; the analytic ROW_NUMBER function does that easily.
    Since you didn't post CREATE TABLE and INSERT statements for your sample data, I'll use tables from the scott schema to show how this is done.
    Say you have a query like this:
    SELECT       d.dname
    ,       e.empno, e.ename, e.job, e.sal
    FROM       scott.dept  d
    JOIN       scott.emp   e  ON   d.deptno = e.deptno
    ORDER BY  dname;
    which produces this output:
    DNAME               EMPNO ENAME      JOB              SAL
    ACCOUNTING           7934 MILLER     CLERK           1300
    ACCOUNTING           7839 KING       PRESIDENT       5000
    ACCOUNTING           7782 CLARK      MANAGER         2450
    RESEARCH             7876 ADAMS      CLERK           1100
    RESEARCH             7902 FORD       ANALYST         3000
    RESEARCH             7566 JONES      MANAGER         2975
    RESEARCH             7369 SMITH      CLERK            800
    RESEARCH             7788 SCOTT      ANALYST         3000
    SALES                7521 WARD       SALESMAN        1250
    SALES                7844 TURNER     SALESMAN        1500
    SALES                7499 ALLEN      SALESMAN        1600
    SALES                7900 JAMES      CLERK            950
    SALES                7698 BLAKE      MANAGER         2850
    SALES                7654 MARTIN     SALESMAN        1250
    Now say you want to change the query so that it only returns one row per department, like this:
    DNAME               EMPNO ENAME      JOB              SAL
    ACCOUNTING           7782 CLARK      MANAGER         2450
    RESEARCH             7876 ADAMS      CLERK           1100
    SALES                7499 ALLEN      SALESMAN        1600
    where the empno, ename, job and sal columns on each row of output are all taken from the same row of scott.emp, though it doesn't really matter which row that is.
    One way to do it is to use the analytic ROW_NUMBER function to assign a sequence of unique numbers (1, 2, 3, ...) to all the rows in each department. Since each sequence starts with 1, and the numbers are unique within a department, there will be exactly one row per department that was assigned the number 1, and we'll display that row.
    Here's how to code that:
    WITH  got_r_num  AS
    (
        SELECT  d.dname
        ,       e.empno, e.ename, e.job, e.sal
        ,       ROW_NUMBER () OVER ( PARTITION BY  d.dname
                                     ORDER BY      e.ename
                                   )  AS r_num
        FROM    scott.dept  d
        JOIN    scott.emp   e  ON  d.deptno = e.deptno
    )
    SELECT    dname
    ,         empno, ename, job, sal
    FROM      got_r_num
    WHERE     r_num = 1
    ORDER BY  dname;
    Notice that the sub-query got_r_num is almost the same as the original query; only it has one additional column, r_num, in the SELECT clause, and the sub-query does not have an ORDER BY clause. (Sub-queries almost never have an ORDER BY clause.)
    The ROW_NUMBER function must have an ORDER BY clause. In this example, I used "ORDER BY ename", meaning that, within each department, the row with the first ename (in sort order) will get r_num=1. You can use any column or expression in the ORDER BY clause. You might as well use something consistent and predictable, like ename, but if you really wanted arbitrary numbering you could use a constant in the analytic ORDER BY clause, e.g. "ORDER BY NULL".

  • Lock Up Your Data for Up to 90% Less Cost than On-Premises Solutions with NetApp AltaVault

    June 2015
    Explore
    Data-Protection Services from NetApp and Services-Certified Partners
    Whether delivered by NetApp or by our professional and support services certified partners, these services help you achieve optimal data protection on-premises and in the hybrid cloud. We can help you address your IT challenges for protecting data with services to plan, build, and run NetApp solutions.
    Plan Services—We help you create a roadmap for success by establishing a comprehensive data protection strategy for:
    Modernizing backup for migrating data from tape to cloud storage
    Recovering data quickly and easily in the cloud
    Optimizing archive and retention for cold data storage
    Meeting internal and external compliance regulations
    Build Services—We work with you to help you quickly derive business value from your solutions:
    Design a solution that meets your specific needs
    Implement the solution using proven best practices
    Integrate the solution into your environment
    Run Services—We help you optimize performance and reduce risk in your environment by:
    Maximizing availability
    Minimizing recovery time
    Supplying additional expertise to focus on data protection
    Rachel Dines
    Product Marketing, NetApp
    The question is no longer if, but when you'll move your backup-and-recovery storage to the cloud.
    As a genius IT pro, you know you can't afford to ignore cloud as a solution for your backup-and-recovery woes: exponential data growth, runaway costs, legacy systems that can't keep pace. Public or private clouds offer near-infinite scalability, deliver dramatic cost reductions and promise the unparalleled efficiency you need to compete in today's 24/7/365 marketplace.
    Moreover, an ESG study found that backup and archive rank first among workloads enterprises are moving to the cloud.
    Okay, fine. But as a prudent IT strategist, you demand airtight security and complete control over your data as well. Good thinking.
    Hybrid Cloud Strategies Are the Future
    Enterprises, large and small, are searching for the right blend of availability, security, and efficiency. The answer lies in achieving the perfect balance of on-premises, private cloud, and public services to match IT and business requirements.
    To realize the full benefits of a hybrid cloud strategy for backup and recovery operations, you need to manage the dynamic nature of the environment— seamlessly connecting public and private clouds—so you can move your data where and when you want with complete freedom.
    This begs the question of how to integrate these cloud resources into your existing environment. It's a daunting task. And, it's been a roadblock for companies seeking a simple, seamless, and secure entry point to cloud—until now.
    Enter the Game Changer: NetApp AltaVault
    NetApp® AltaVault® (formerly SteelStore) cloud-integrated storage is a genuine game changer. It's an enterprise-class appliance that lets you leverage public and private clouds with security and efficiency as part of your backup and recovery strategy.
    AltaVault integrates seamlessly with your existing backup software. It compresses, deduplicates, encrypts, and streams data to the cloud provider you choose. AltaVault intelligently caches recent backups locally while vaulting older versions to the cloud, allowing for rapid restores with off-site protection. This results in a cloud-economics–driven backup-and-recovery strategy with faster recovery, reduced data loss, ironclad security, and minimal management overhead.
    AltaVault delivers both enterprise-class data protection and up to 90% less cost than on-premises solutions. The solution is part of a rich NetApp data-protection portfolio that also includes SnapProtect®, SnapMirror®, SnapVault®, NetApp Private Storage, Cloud ONTAP®, StorageGRID® Webscale, and MetroCluster®. Unmatched in the industry, this portfolio reinforces the data fabric and delivers value no one else can provide.
    Figure 1) NetApp AltaVault Cloud-Integrated Storage Appliance. (Source: NetApp, 2015)
    Four Ways Your Peers Are Putting AltaVault to Work
    How is AltaVault helping companies revolutionize their backup operations? Here are four ways your peers are improving their backups with AltaVault:
    Killing Complexity. In a world of increasingly complicated backup and recovery solutions, financial services firm Spot Trading was pleased to find its AltaVault implementation extremely straightforward—after pointing their backup software at the appliance, "it just worked."
    Boosting Efficiency. Australian homebuilder Metricon struggled with its tape backup infrastructure and rapid data growth before it deployed AltaVault. Now the company has reclaimed 80% of the time employees formerly spent on backups—and saved significant funds in the process.
    Staying Flexible. Insurance broker Riggs, Counselman, Michaels & Downes feels good about using AltaVault as its first foray into public cloud because it isn't locked in to any one approach to cloud—public or private. The company knows any time it wants to make a change, it can.
    Ensuring Security. Engineering firm Wright Pierce understands that if you do your homework right, it can mean better security in the cloud. After doing its homework, the firm selected AltaVault to securely store backup data in the cloud.
    Three Flavors of AltaVault
    AltaVault lets you tap into cloud economics while preserving your investments in existing backup infrastructure, and meeting your backup and recovery service-level agreements. It's available in three form factors: physical, virtual, and cloud-based.
    1. AltaVault Physical Appliances
    AltaVault physical appliances are the industry's most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Companies deploy AltaVault physical appliances in the data center to protect large volumes of data. These datasets typically require the highest available levels of performance and scalability.
    AltaVault physical appliances are built on a scalable, efficient hardware platform that's optimized to reduce data footprints and rapidly stream data to the cloud.
    2. AltaVault Virtual Appliances for Microsoft Hyper-V and VMware vSphere
    AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They're also perfect for enterprises that want to safeguard branch offices and remote offices with the same level of protection they employ in the data center.
    AltaVault virtual appliances deliver the flexibility of deploying on heterogeneous hardware while providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto VMware vSphere or Microsoft Hyper-V hypervisors—so you can choose the hardware that works best for you.
    3. AltaVault Cloud-based Appliances for AWS and Microsoft Azure
    For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are key to enabling cloud-based recovery.
    On-premises AltaVault physical or virtual appliances seamlessly and securely back up your data to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance in AWS or Azure and recover data in the cloud. Usage-based, pay-as-you-go pricing means you pay only for what you use, when you use it.
    AltaVault solutions are a key element of the NetApp vision for a Data Fabric; they provide the confidence that—no matter where your data lives—you can control, integrate, move, secure, and consistently manage it.
    Figure 2) AltaVault integrates with existing storage and software to securely send data to any cloud. (Source: NetApp, 2015)
    Putting AltaVault to Work for You
    Four common use cases illustrate the different ways that AltaVault physical and virtual appliances are helping companies augment and improve their backup and archive strategies:
    Backup modernization and refresh. Many organizations still rely on tape, which increases their risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability. AltaVault serves as a tape replacement or as an update of old disk-based backup appliances and virtual tape libraries (VTLs).
    Adding cloud-integrated backup. AltaVault makes a lot of sense if you already have a robust disk-to-disk backup strategy, but want to incorporate a cloud option for long-term storage of backups or to send certain backup workloads to the cloud. AltaVault can augment your existing purpose-built backup appliance (PBBA) for a long-term cloud tier.
    Cold storage target. Companies want an inexpensive place to store large volumes of infrequently accessed file data for long periods of time. AltaVault works with CIFS and NFS protocols, and can send data to low-cost public or private storage for durable long-term retention.
    Archive storage target. AltaVault can provide an archive solution for database logs or a target for Symantec Enterprise Vault. The simple-to-use AltaVault management platform can allow database administrators to manage the protection of their own systems.
    We see two primary use cases for AltaVault cloud-based appliances, available in AWS and Azure clouds:
    Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, AltaVault cloud-based appliances are key to enabling cloud-based disaster recovery. Via on-premises AltaVault physical or virtual appliances, data is seamlessly and securely protected in the cloud.
    Protect cloud-based workloads.  AltaVault cloud-based appliances offer an efficient and secure approach to backing up production workloads already running in the public cloud. Using your existing backup software, AltaVault deduplicates, encrypts, and rapidly migrates data to low-cost cloud storage for long-term retention.
    The benefits of cloud—infinite, flexible, and inexpensive storage and compute—are becoming too great to ignore. AltaVault delivers an efficient, secure alternative or addition to your current storage backup solution. Learn more about the benefits of AltaVault and how it can give your company the competitive edge you need in today's hyper-paced marketplace.
    Rachel Dines is a product marketing manager for NetApp where she leads the marketing efforts for AltaVault, the company's cloud-integrated storage solution. Previously, Rachel was an industry analyst for Forrester Research, covering resiliency, backup, and cloud. Her research has paved the way for cloud-based resiliency and next-generation backup strategies.

    You didn't say what phone you have - but you can set it to update and backup and sync over wifi only - I'm betting that those things are happening "automatically" using your cellular connection rather than wifi.
    I sync my email automatically when I have a wifi connection, but I can sync manually if I need to.  Downloads happen for me only on wifi, photo and video backup are only over wifi, app updates are only over wifi....check your settings.  Another recent gotcha is Facebook and videos.  LOTS of people are posting videos on Facebook and they automatically download and play UNLESS you turn them off.  That can eat up your data in a hurry if you are on FB regularly.

  • Schema version is lower than expected value

    While configuring the database at Step 3 of 9, the installer threw an exception: INST-6177: OIM Schema version is lower than expected value.
    The suggested action was: Create OIM 11g schema using Repository Creation Utility and proceed with configuration.
    Now, please help me...

    For the exception, the trace says that:
    [2011-05-19T14:29:10.511+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] MDS Schema Version is correct
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Exiting method executeHandler
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] Database is not encryped. This is not an upgrade flow.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Could not fetch the schema version from the database
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    ERROR ====>>>>:INST-6177
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    Cause:OIM Schema version is lower than the expected value
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    Action:Create OIM 11g schema using Repository Creation Utility and proceed with configuration.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] Retrieving default locale set in the machine.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Exiting method executeHandler
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Handler launch end: oimQueriesHandler.checkForUpgrade
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Handler returned status: FAILED
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Error in validating schema details

  • F-47 request should be approved by higher authority for lesser amt than asked

    An F-47 request should be approved by a higher authority, for a lesser amount than asked for if necessary; only then should the down payment be made.
    They want a list of down payment requests which can be edited, approved, or disapproved by higher authorities. The payment clerk should make a payment only for the amount that is approved.
    Can this be done in SAP without credit management?
    warm regards
    Manjunath

    Ah, I've just re-read my BT bill and it does indeed say BT Infinity Option 2 !
    I took that to mean BT Infinity 2, but maybe it means BT Infinity 1 Option 2 (which gives unlimited usage).
    Viewing my account online, it also says "Up to 76Mb download speed" and "Up to 19Mb upload speed", so I think I'm on Infinity 2 but with my IP profile still stuck at the same 38.72 that I was originally given.
    Anyway, to re-phrase my dilemma......
    Since I'm consistently hitting the IP Profile ceiling, I feel that my line MIGHT be capable of running faster if it were allowed. 
    What I don't want to do is apply for the no-cost "up to 76" option, and then discover that the line is not so stable.
    When I originally signed up, I was told to expect about 26 down speed, so BT obviously thought that my line would not be good enough for 38 even.
    I suppose I'm really asking if anyone has gone from 38 to 76 and regretted it because they then got a less stable line, variable speeds, or even consistently lower speeds (or maybe for any other reasons).
    Or, maybe I'm supposed to be getting a faster speed, but just have a "stuck Ip Profile" ?

  • Crop in Lightroom 5 works less logically than in Lr4

    Dear Support Team,
    My colleagues and I have found that the crop tool in Lightroom 5 works less logically than in previous versions. If we crop once with a preset aspect ratio (for example 3:4) and then try to crop again with another aspect ratio (for example 16:9), it is cropped in the correct proportion, but only within the frame of the previous crop. So if we do 4 crops with different aspect ratios one by one, we end up with a very tiny part of the picture. In previous versions of Lightroom, every new crop took the maximum area of the photo.

    This is a user to user forum - there is no support team here normally.
    As for the crop changes, many of us welcomed them: it was a huge time suck in our workflow to have crops reset to maximum when the aspect ratio changed modestly. I consider this a huge bug fix, or a feature if you prefer.

Maybe you are looking for

  • Sending oracle report automatically

    Hello, can anybody help in figuring out how to automatically send a PDF-format report from Oracle Reports 10g to a specific email address? If you could, please give sample code to achieve this. Thanks a lot...

  • Navigation Link Width Problem

    Dear All, I created a website wrapper (960 px) and the table is 960 px; I also set the navigation menu to 960 px. I don't know why a 100 px gap appears on the right side; how can I reduce it? The gap is marked in green. Please help. weblink

  • Can I write titles or comments on the picture image using iPhoto?

    I would like to be able to write titles or labels on the pictures stored in iPhoto files without the need to go to another photo application. Is there a way to write descriptive titles on the image using iPhoto? Thank you. g5   Mac OS X (10.4.8)

  • Solaris 11 : End of Support  Legacy Hardware

    Hi all, Could you please tell me if it's possible to install Solaris 11 on a V490? I know that I'll not get any support. I would like to use my old hardware to test Solaris 11. Thanks!

  • $1200 and it won't even run!!!

    I just purchased FCP Studio 2. Loaded it on my system last night and can't get any of the programs to run. I've repaired permissions, rebooted the system, and installed all the updates. When FCP loads, a dialog pops up with the message Final Cut Pro qu