Help me with retrieving more records!

From a document on this website, I know that the default value for a Statement's fetch size is 10; I have also verified this from my own JSP pages.
But a problem has come up: when I call resultset.next() and the cursor moves past row 10 before reaching the end of the result set, an exception occurs. It says "An unsupported syntax for refreshRow()". I never call this method, so I think it must be called within some method of Oracle's own.
So I have a question: if I have a result set with more than 10 records, how can I get the rest?
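For what it's worth, here is a minimal sketch of the usual pattern (the connection URL, credentials, and the members table are placeholder assumptions). The fetch size only controls how many rows the driver pulls per network round trip; it is not a cap on the total, and with a plain forward-only statement next() keeps fetching batches past row 10 until the result set is exhausted. The "Unsupported syntax for refreshRow()" error is typically seen when the statement was created as TYPE_SCROLL_SENSITIVE, because the Oracle driver then keeps a window of fetch-size rows and re-fetches it internally via refreshRow(), which fails for queries it cannot refresh; the result set type is worth checking.

import java.sql.*;

public class FetchAllRows {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; adjust for your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             // Forward-only, read-only statements avoid the scroll-sensitive
             // code path that calls refreshRow() behind the scenes.
             Statement stmt = con.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {

            stmt.setFetchSize(100); // rows per round trip, NOT a limit on rows returned

            try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM members")) {
                while (rs.next()) { // transparently fetches the next batch as needed
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}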

arullllk wrote:
How do I make my container expand? I am getting content from the database, and some of it is longer and some shorter. I want the CSS for a DIV whose height must not exceed 20px when the content is long and should shrink to 10px when the content is short, with a fixed width of, say, 100px.
Please help me, guys.
If you want the height to expand then simply leave out the height declaration on the DIV, because it is not necessary.
Alternatively (only if you must), you could set it to auto like this:
div {
    height: auto;
}

Similar Messages

  • Reading long text for more records at a time

    Hi all,
    We have a requirement where data such as TEXTID, TEXTNAME, TEXTOBJECT and LANGUAGE must be taken into an internal table, and for each record in the internal table I have to read the long text in order to compare it against a given search text.
    If I use READ_TEXT inside a LOOP ... ENDLOOP it works, but that may not be appropriate from a performance point of view.
    Is there a function module which can read long texts for several records at a time?
    The long text data in STXL is in raw format, right? Is there any way to convert the raw data to normal text, so that by hitting STXL directly I can read the long text data for more than one record at a time?
    Thanks in advance,
    sanju.

    Hi Sanju,
    Below is a code snippet which shows how to read a long text from the screen and append it to an internal table. This code actually reads the text from the screen and inserts a record into STXL and STXH.
    From your query, what I understood is that you are storing the long text from the screen in an internal table because you do not want to use the READ_TEXT FM due to the performance issue.
    Since TDLINE (of the TLINE table type) is 132 characters long, I use this small piece of logic to read the screen data and append it to my internal table.
    *Data Declarations
      DATA: lv_strlen TYPE i,
            lv_create TYPE boolean,
            lv_desc TYPE string.
      DATA: ls_text TYPE tline,
            ls_basic_text TYPE stxh.
      DATA: lt_text TYPE ztty_tline_tab.
      CONSTANTS:
       lc_tdid TYPE  thead-tdid VALUE 'Z001',
       lc_tdobject TYPE thead-tdobject VALUE 'Z_ALERTS'.
    *Appending the text to the internal table.
      lv_strlen = STRLEN( iv_alert_text-alert_text ).
      lv_desc = iv_alert_text-alert_text.
      IF lv_strlen < 132.
        ls_text-tdformat = '*'.
        ls_text-tdline = lv_desc.
        APPEND ls_text TO lt_text.
      ELSE.
    *logic to wrap text
        DO.
          ls_text-tdformat = '*'.
          IF STRLEN( lv_desc ) <= 132. "<= so a remainder of exactly 132 chars is appended here, instead of reading past the string at lv_desc+132(1) below
            ls_text-tdformat = '*'.
            ls_text-tdline = lv_desc.
            APPEND ls_text TO lt_text.
            EXIT.
          ENDIF.
          IF lv_desc+132(1) <> ' '.
            CONCATENATE lv_desc(131) '-' INTO ls_text-tdline.
            lv_desc = lv_desc+131.
          ELSE.
            ls_text-tdformat = '*'.
            ls_text-tdline = lv_desc(132).
            lv_desc = lv_desc+132.
          ENDIF.
          APPEND ls_text TO lt_text.
        ENDDO.
      ENDIF.
    Please reward graciously if you found this helpful, and do ask me if I have not answered you properly.
    Thank you.
    Message was edited by:
            P M Harish

  • How to display help text for Region in a page having more than one region

    Hi all,
    I want to display help text for each region on my page. I have 5 regions on the page.
    I have a big help text for each region, more than 10 lines each, so is there an easy way to attach the help text to each region? I tried a lot by searching the forums but was not able to implement it. If there is a document on this, please let me know.
    Thanks in advance,
    Amit

    I mean: either clicking the region title should display about 10 lines of help text in a popup window,
    or on mouseover, whenever we put the cursor on the region title, it should display the 10-line help text message.
    Hope you get my question now.
    Thanks,
    Amit

  • I'm doing a design for a presale where I need a router that supports PAT for 500 (or slightly more) users; it needs no features beyond static routing and a DHCP pool for 500 users. Can you help me choose which router to recommend?

    I'm doing a design for a presale where I need a router that supports PAT for 500 (or slightly more) users; it needs no features beyond static routing and a DHCP pool for 500 users. Can you help me choose which router to recommend?

    What is your current WAN speed, and your projected WAN speed over the next 3 years?

  • Help in uccx (leave audio recording file for caller)

    Hi,
    on the attached snapshot of the scripting ("leave audio recording for caller" when the agent is out of office): it works successfully, but when a second call comes in, the same file is recorded over. How can I make it record as
    recording1.wav
    recording2.wav
    recording3.wav
    Thanks 

    Hi
    You can use a 'Set Exception' step to set a ContactInactiveException trigger that goes to a Label.
    Do this before the 'Recording' step; if the caller then hangs up, the script goes to that Label, which should be placed so that the steps to upload the recording still complete.
    Currently the script will just stop if the caller hangs up.
    Aaron

  • Ideas or help needed for a simple, robust pluggable framework

    Hi all,
    Having written a fairly decent plugin engine, similar in concept to the Eclipse plugin engine, although at a more generic scale, I am looking for any possible ideas for a Java Swing framework that is built around the engine, with the concept of using a framework that is built on mostly plugins. My engine handles, or will soon handle, a number of features to make the engine robust enough, yet still easy enough, to use for just about any purpose.
    The engine is pretty simple, although with a bit more work I feel will be overall a pretty robust and powerful plugin engine. Each plugin is made up of one or more "services". A plugin is a .jar file that contains a plugin-conf.xml config file, the classes that implement the Service interface, and any supporting classes. The "plugin" is really the package of one or more services and supporting classes. The engine will handle the ability to work with expanded dir structures as well, so that the build process doesn't have to create .jar files on every build of a plugin. The engine has built in support to load, unload and reload a plugin at runtime. This helps during development by allowing auto-reload of a plugin service without having to restart the app. The engine has the ability to "watch" URLs in a separate thread (still working on this), and at given intervals if a change occurs to any plugin, that plugin is reloaded. This is configurable on a per plugin basis in the config file.
    Every plugin .jar file gets its own classloader instance. Because of the nature of a framework that may rely heavily on plugins, it will be very common to have plugin dependencies, where a plugin service may rely on one or more other plugin services. The dependencies are configured in the plugin-conf.xml file, and the engine resolves these when the plugin is loaded, automatically. Once all plugins have been loaded, an "init" call is made that then goes and resolves all plugin service dependencies, setting up the behind the scenes work to make sure any service can use any other service it defines to depend on. Another area is plugin versions. There will no doubt be a time when some sort of application may have legacy plugins, but also have newer plugins. For example, an application built on a "core" set of plugins, may eventually update the core plugins with newer versions. The engine allows the "old" plugins to exist and work while new versions of the same plugins may be loaded and working at the same time. This allows older plugins that depend on the old set of core plugins to work, while newer plugins that depend on the new core plugins may work also. Any plugin may depend on one or more services specified by specific versions, or a range of versions.
    Plugin services can be declared to be created when first loaded, or lazily instantiated. Ideally, an application would opt for lazy instantiation until a plugin is needed. For example, a number of plugins may need to add menu items or buttons that would trigger their services. The plugin does not actually need to be created until the menu or button is clicked. There is one BIG problem with how this engine works, though. Unlike the Eclipse (and other) engines, where the config file defines the menu item(s), buttons, etc. in an XML sort of language, this engine is built for generic use, and therefore is not specific to menu items or buttons triggering a service instantiation. Therefore, a little "hack" is required. A specific plugin that is created when first loaded will be required to set up all the menu items for specific plugins, then handle the actionPerformed() call to instruct the engine to create the service. The next step would be for the plugin service to add its own handler to the specific menu item it depends on, and remove the "old" handler the startup plugin added to handle the initial click. Another thought just struck me, though. Because the engine must use an XML parser to load every plugin-conf.xml file, it might be possible to "extend" the parsing routine, whereby an extending class could be added to the engine to parse plugin-conf.xml files. First the plugin engine's own routine would parse it. Then the extending class could parse any extra plugin-conf.xml info, such as menu item settings, and directly set up the menu items and handlers in this manner. I will probably include this ability directly in the engine soon anyway, so that nobody else has to do this, but this is one area I would appreciate some feedback on.
    Anyway, that is the gist of the engine. There is more to it under the hood, but that sums up a good part of it. Now, the pluggable framework, much like what the "shell" of Eclipse, Forte and so forth offer, is built around my engine to make it very easy to build Swing applications with a pluggable framework underneath. The idea is to package up a startup main class that is configurable, plus a number of useful plugins that other plugins could depend on, such as an Outlook layout, menuing, toolbars, drag/drop, history, undo/redo, macro record, open/save/search/find/replace dialogs, and so forth. This isn't just for an IDE though. The developer using the framework could deploy the basic app with the plugins of his/her choice, and add to it with his/her own plugins.
    Soooo, after this long post, what I am getting at is if anyone would be interested in helping out with ideas, feedback, testing, core framework plugins, and so forth. At this time I am keeping the code closed, but will probably public domain it, open source it, or whatever. The finished framework should make it easy for anyone to quickly build useable applications, and if all goes well, I'd like to set up a site with a location for 3rd party plugins to be uploaded, for download, comments, etc. Being a web developer, I myself will probably work on some plugins for Web Services, web stress testing, and so forth. I have lots of ideas for useable plugins.
    On that note, one application I am personally working on for my own use is a simple yet possibly robust internet suite of apps. I want to incorporate FTP, email, newsgroups, and IRC/AOL IM/Yahoo IM/MSN IM/ICQ chat into a single app. Every aspect of it would be plugins. Frankly, I hate Outlook; Eudora is alright, but I want to do some things with the email app. I also want a single IM/chat app that can talk with all protocols (not an easy task, take a look at GAIM). Newsgroups are handy for developers and others to work with, as is FTP. But even more so, having it all in one big application framework that allows the apps to share data, work with one another, and so forth is appealing to me, and being written in Java it could potentially work on many platforms, giving some platforms a possibly nice set of internet apps to use. Being able to send an email to a mailing list AND have it posted to specific newsgroups at the same time, without having to copy/paste or open up separate applications, has appeal. Directly emailing from any chat or newsgroup link without another app starting up is a little faster as well. Those are just "small" things that could prove to be very cool in a complete internet app. Adding a web browser... well, I don't think I want to go that route. But if there is already a decent Java-built web browser, it shouldn't be too hard to add it as a plugin.
    So, if anyone is interested, by all means, drop a post to this thread, let me know of interest, feedback, ideas, point out bad things, and so forth. I appreciate all forms of communication.
    Thanks.

    Yes I do. I am using it now with my work-related project.
    I am in fact reworking the engine a bit now. I want to incorporate the notion of services (like OSGi), whereby a plugin can register services. These services are "global" in scope, meaning any plugin may request the use of a service. However, services, unlike plugins, are not guaranteed to be available. Therefore, plugins using services must be coded to properly handle this possibility. As an example, imagine an email application using my engine. One plugin may provide the email gateway, including the JavaMail .jar library, and provide the email service. Other plugins, such as the one that provides the functionality for the SEND button, would "use" this service. At runtime, when the send button is pressed, it would ask the engine for the email service. If available, off goes the email. If not, it could pop up a dialog indicating some sort of message that the email service is not available.
    I am at the VERY beginning stages in this direction so I'd love to have ideas, thoughts, suggestions as to how this might be implemented. I do believe though that it will provide for a more powerful engine. The nice thing is, while the engine will support static runtime plugins, it will also support dynamic services that can come and go during the runtime. The key is that plugins using services do not maintain references to them, but instead query the engine each time a plugin needs to use a service.
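    A minimal sketch of that lookup pattern (all names here are invented, not the engine's actual API): plugins ask a registry for the service at the moment of use and handle its absence, instead of caching a reference.

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical registry illustrating "query at use time, don't cache".
    final class ServiceRegistry {
        private final Map<String, Object> services = new ConcurrentHashMap<>();

        void register(String name, Object service) { services.put(name, service); }
        void unregister(String name) { services.remove(name); }

        // Services may come and go, so lookups can legitimately come back empty.
        <T> Optional<T> lookup(String name, Class<T> type) {
            Object s = services.get(name);
            return type.isInstance(s) ? Optional.of(type.cast(s)) : Optional.empty();
        }
    }

    interface EmailService { void send(String to, String subject, String body); }

    class SendButtonHandler {
        private final ServiceRegistry engine;
        SendButtonHandler(ServiceRegistry engine) { this.engine = engine; }

        void onSendClicked() {
            // Query each time; fall back gracefully if the service is absent.
            engine.lookup("email", EmailService.class).ifPresentOrElse(
                svc -> svc.send("[email protected]", "Hi", "Hello!"),
                () -> System.err.println("Email service is not available"));
        }
    }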
    Static plugins are those that are guaranteed to be available; if not, any dependent plugin is not allowed to load. That is, if A depends on B and B cannot be loaded, A is unloaded as well, since it can't perform its job without B; it depends on B in some manner to complete its function. Imagine a plugin adding an option panel to the Preferences page, only the Preferences plugin is not loaded. It just can't work. However, with some work, there could be variations on this. That is, a plugin may provide a menu item as well as a preferences page. If the Preferences plugin is not available, then the plugin may simply still work via the menu item, but have no preferences panel available. This should be configurable via the plugin-conf.xml config file. However, as I have it now, using extension points and extensions like Eclipse does, it is also possible that if the Preferences plugin isn't loaded, it won't look for ANY extensions extending its extension point, and therefore the plugins could all still run but there would simply be no preferences page. So I am not entirely sure yet which way is best for this to work.
    My engine, as it stands now, allows for separate-classloader plugin loading, and it automatically resolves all dependencies by creating the plugin registry each time the engine is started up. To speed up plugin loading, it maintains a plugins.xml file in the root dir that keeps track of each plugin that was loaded and its last timestamp. Plugins can be open directory files or jarred up into .PAR files (think .WAR or .EAR files). The engine can find .par or open-dir plugins in multiple locations (including URL locations for direct .par files). When it finds a .par file, it first decompresses the .par file to a plugin work directory. Every plugin must have a plugin-conf.xml in its root dir, and either a /classes dir where compiled classes are, or a .jar file in the root path of the plugin, where the /classes dir supersedes the .jar file. Alternatively, anything in a /lib dir is automatically picked up as part of the plugin classpath. So a plugin that wraps the xerces.jar file can simply place the xerces.jar in the /lib dir and automatically present the xerces library to all dependent plugins (which can import the xerces classes but need not distribute the xerces.jar file if a plugin they depend on has it in its /lib dir). The "parent lookup" process goes only one parent level deep. That is, if plugin A depends on a class in a /lib/*.jar file in plugin B, then the engine will resolve the class (through delegation) of plugin B. But if A depends on B, and B depends on C, where plugin C's /lib/*.jar file contains a class A is looking to use, this will not work and A will throw a ClassNotFoundException. In other words, the parent lookup only goes as far as the classpath of all dependent plugins, not up the chain of all dependent plugins. Eclipse allows each plugin to "export" various classes, or packages, or entire .jar files, and the lookup can go all the way up the chain if need be. I haven't yet found a big reason for supporting this, so I am not too concerned with that at this point. The engine does support reloadable plugins, although I have not yet implemented it. Because each plugin information object is stored in a Map keyed on the plugin's GUID (found in the plugin-conf.xml file), it is easy enough to load a new plugin (since they get their own classloader) and replace the object at the GUID key and now have a reloaded plugin. The harder part is properly notifying all dependent plugins of the reload and what to do with them. Therefore I have not quite yet implemented this feature, although the first step can easily be done, so long as nobody minds the "remnants" of older plugins lying around and possibly not being garbage collected.
    All of this works now, and I am using it. I do NOT have a generic UI framework just yet. I am working on that now. Eclipse has a very nice feature in that every plugin.xml file builds up the UI without any plugin code ever being created or run. I am working on something like that now, although I am focused more on the engine aspect at this point.
    Two things keep me going. First, the sheer fun of working on this and seeing it succeed, even if a little bit. Second, while I love the idea of Eclipse, OSGi and other engines, so far I have yet to find one that is very easy to write plugins for, is very small, and is "generic" enough for any use. Some may argue JBoss core, at 29K, can do this. I don't know if it can. It is built around JMX, and I don't know that I agree JMX is the "ultimate" core plugin engine for all types of apps. Not that mine is either, but I'd like to see what I am working on become that, if possible. Currently, with an XML parser (www.xmlpull.org) added as part of the code, my engine is about 40K with debug info, maybe about 28K without. I expect it to grow a bit more with services, reloadable/unloadable code, and some other stuff. However, I am thinking it will still be around 50K in size, and in my opinion, with an XML read/write parser (a very fast one at that), extensions/extension points, services, dependencies, multiple versions of plugins (soon), load/unload/reload capabilities, .par management (unjar into work dir, download .par files from URLs, etc.) and open directory capabilities, individual classloaders, automatic dependency resolution, dynamic dependency resolution and possibly even more, I think what my engine offers (and will offer) is pretty cool in my book.
    Nonetheless, there is always room for improvement. One of the things I pride myself on is using as little code as possible and keeping it neat, easily readable, and as non-archaic as possible; that makes for an easily maintainable project.
    So, having said all that, YES, the engine can be used as is right now. It does not reload plugins, but you can dynamically load plugins, handle dependency resolution, have a very fast xml read/write parser at your disposal for any plugin, and for the most part easily write plugins. That is all possible now. I should put the engine I have now up on my generic-plugin-engine sourceforge project one of these days, perhaps soon I will do that! While I have no problem handing out the code, I am currently the only committer and I don't have it loaded into CVS at this point. I would like to do so very soon.
    So, if you are interested, by all means, let me know and I'll be happy to send you what I have, and love to have more help on the next version of this.

  • Recommended lowest Mac Specifications for 32-Channel Recording.

    I've been looking a lot into using Logic for multi-track recording of our bands in a live setting.
    I mix on a Yamaha M7CL and just purchased two MY16-AT expansion cards that each give us 16 channels of digital direct out, 32 channels in total. To go with that, I picked up a PreSonus FireStudio Lightpipe 32-channel interface. For the time being I only have Logic Express 8, so I will be using that on my MacBook Pro to record 16 channels.
    My question is: what is recommended for best performance on a desktop machine when recording 32 channels, once I upgrade to Logic Studio? I'm shooting for a new iMac, but want to be sure I am getting a machine that won't struggle too much when I'm editing the tracks. For example, the 20" (current generation) iMac, 2.66GHz with 4GB RAM: I'm curious how that would handle it, especially on a bit of a budget.
    Any ideas or suggestions are helpful, thanks.

    I've never used my Mac to record live, but I have had more than 66 mono tracks running at once in LE7 on my MacBook, from an external drive, without a hitch. For 32 tracks, the specs you listed should be plenty of machine to handle it, IMO.

  • Problems with retrieving data from tables with 240 and more records

    Hi,
    I've been connecting to Oracle 11g Server (not sure exact version) using Oracle 10.1.0 Client and O10 Oracle 10g driver. Everything was ok.
    I installed Oracle 11.2.0 Client and I started to have problems with retrieving data from tables.
    First I used the same connection string, driver and so on (O10 Oracle 10g), then I tried ORA Oracle, but with no luck. The result is like this:
    I'm able to connect to the database. I'm able to retrieve data, but only from small tables (e.g. with 110 records it works perfectly using both the O10 and ORA drivers). When I try to retrieve data from tables with 240 or more records, the retrieval simply hangs (nothing happens at all: no error, no timeout). The application seems to hang forever.
    I'm using PowerBuilder to connect to the database (either PB10.5 using the O10 driver or PB12 using the ORA driver). I used DBTrace, so I can see that the query hangs on the first FETCH.
    So for the retrievals that hang I have something like:
    (3260008): BIND SELECT OUTPUT BUFFER (DataWindow):(DBI_SELBIND) (0.186 MS / 18978.709 MS)
    (3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=0
    (3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=1
    (3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=0
    (3260008): EXECUTE:(DBI_DW_EXECUTE) (192.982 MS / 19171.691 MS)
    (3260008): FETCH NEXT:(DBI_FETCHNEXT)
    and this is the last line,
    while for retrievals that finish, I see each FETCH producing a time and data in the buffer, then moving on to the next FETCH until all data is retrieved.
    On a side note, I have no problems retrieving the data with either SQL Developer or DbVisualizer.
    Problems started when I installed the 11.2.0 Client. Even if I want to use the 10.1.0 Client, the same problem occurs. So I guess something from 11.2.0 overrides the 10.1.0 settings.
    I will appreciate any comments/hints/help.
    Thank you very much.

    pgoel wrote:
    I've been connecting to Oracle 11g Server (not sure exact version) using Oracle 10.1.0 Client and O10 Oracle 10g driver. Everything was ok.
    Earlier (before installing the new stuff), did you ever try retrieving data from big tables (240 or more records)? If yes, was it working?
    Yes, with the Oracle 10g client (before installing 11g) I was able to retrieve any data, whether it was 10k+ records or 100 records. Installing the 11g client changed something, so that even using the old 10g client (which I still have installed) fails to work. The same problem occurs whether I use the 10g or the 11g client now. PowerBuilder hangs on retrieving tables with more than about 240 records.
    Thanks.

  • How to generate the batch number for group of records

    Hi All,
    Requirement: I have created a sequence to generate the batch numbers. I have to get the NEXTVAL of the sequence as the batch number (this column is not part of the table; it is a dynamic/alias column) once for every 50,000 records of a table.
    Detailed description:
    For example, if I have 10 records in a table and I want to generate a batch number for every 2 records, then the output should look like below.
    batch_number  qty
    1              10
    1              20
    2              30
    2              40
    3              50
    3              60
    4              70
    4              80
    5              90
    5             100
    Please help me on this. Please let me know if you need more info.

    One more update on the above request: this has to be done using a SQL query only.
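    Since it has to be pure SQL, one common approach (a sketch; the table name, qty column, and ordering are assumptions) is to number the rows in a fixed order and derive the batch number as CEIL(ROWNUM / batch_size), so every chunk of N rows shares one value; a sequence NEXTVAL increments on every row, so it cannot easily be made to step once per 50,000 rows in a plain query. The JDBC wrapper below is only there to make the sketch self-contained; the batching happens entirely inside the SQL.

    import java.sql.*;

    public class BatchNumbers {
        public static void main(String[] args) throws SQLException {
            // CEIL(ROWNUM / 2) gives rows 1-2 batch 1, rows 3-4 batch 2, and so on;
            // replace 2 with 50000 for the real requirement. The inner view fixes
            // the row order before ROWNUM is assigned.
            String sql =
                "SELECT CEIL(ROWNUM / 2) AS batch_number, qty "
                + "FROM (SELECT qty FROM my_table ORDER BY qty)";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getInt("batch_number") + " " + rs.getInt("qty"));
                }
            }
        }
    }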

  • Report for freight condition records

    Dear Friends,
    In my transportation module project, the requirement is a report of the freight condition records.
    We have created two condition tables in one access sequence; the two tables are: 1. route/ship-to-party, 2. route.
    Many condition records have been maintained, and the users want to see all the condition records in one single report.
    The report is based on those two tables, whose fields come from the KOMG structure,
    but the price is stored in the KONP table. How do I fetch both into the report?
    With regards
    Lakki

    Let us say the condition tables are 001 and 002.
    Now you need to go to database tables A001 and A002 in SE16.
    Enter the name of the condition type for which you want to get all the condition records,
    and execute.
    You will now get all the condition records for the condition type created for the table, say A001.
    Hope this solves your problem. You can reward if this helps.
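    The link between the condition tables and KONP is the condition record number KNUMH. Purely to illustrate the join logic (in a real system this would be an ABAP SELECT rather than JDBC, the validity-date fields depend on how the condition table was generated, and the connection details and condition type below are placeholders), a sketch:

    import java.sql.*;

    public class FreightConditionReport {
        public static void main(String[] args) throws SQLException {
            // KNUMH (the condition record number) links A001/A002 to KONP,
            // where the rate (KBETR) and its unit (KONWA) are stored; A002
            // would be read the same way with its own key fields.
            String sql =
                "SELECT a.kschl, a.datab, a.datbi, p.kbetr, p.konwa "
                + "FROM a001 a JOIN konp p ON p.knumh = a.knumh "
                + "WHERE a.kschl = ?";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/SAPDB", "user", "pass");
                 PreparedStatement stmt = con.prepareStatement(sql)) {
                stmt.setString(1, "ZFRT"); // placeholder freight condition type
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s %s-%s %s %s%n",
                            rs.getString("kschl"), rs.getString("datab"),
                            rs.getString("datbi"), rs.getBigDecimal("kbetr"),
                            rs.getString("konwa"));
                    }
                }
            }
        }
    }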

  • About "the hospital for the error records"...

    Hi folks,
    I am new to ODI, and I am not buying the idea of "the hospital for the error records". In my personal opinion it introduces many complications. If some dimension records are "dirty" and we put them in the error table, then we are splitting the natural relation of the data, aren't we? The DWH data won't correspond to the source anymore, the dimension's data will be artificially split (the error records could appear on the next day), and we will need an additional process to update the corresponding fact records on the next day (late-arriving changes for the existing facts). That sounds much more complicated than just letting the ETL process fail, fixing the issue and starting it again.
    Experts, please, help with an opinion.
    Thanks in advance!

    @DecaXD, thanks for your comment.
    "after that under my opinion "something is better than none"" This is exactly what I am worried about. The whole "hospital" idea is something which makes the whole process more complicated, without clear benefits. I like the idea of having E$ tables, but just for debugging purposes, just to be able to identify quickly what the error rows were, so I can investigate based on the failed data. What I don't like is the idea that we can EASILY reuse the error data to recover the DWH state later.
    "Are you sure that you want to stop ALL your datawarehouse because the Address field is null?" Well, if during the analysis phase (especially the data profiling step) we missed the fact that the address field could be null, then yes, I would like to stop the whole ETL, because this is a situation which has not been addressed correctly during the design of the ETL process.
    A rhetorical question: are you going to keep recovering the null records after each run (even a "successful" one), or would you fix the interface/mapping to allow nulls, which would then be updated naturally on subsequent runs?
    What about more complicated error rows, like a missing dimension key which relates to many fact records? Should we put the related fact records in their E$ tables as well? What if we have a snowflake design with many references to/from the error dimension records? It sounds too complicated to me.
    I would just store the error rows and stop the ETL. Note that I allow for delay, but I don't publish any inconsistent data in the DWH.
    Below is a quote from the book you mentioned:
    "Don’t overuse the reject file! Reject files are notorious for being dumping
    grounds for data we’ll deal with later. When records wind up in the reject file,
    unless they are processed before the next major load step is allowed to run to
    completion, the data warehouse and the production system are out of sync."
    I am looking for more arguments. I do understand that it is not a simple issue, but I would like to see your real-life experience here.
    Thanks!
    Edited by: hayrabedian on 2013-4-29 14:01
    Edited by: hayrabedian on 2013-4-29 14:02

  • Help with Audio Playback While Recording

    I am an aspiring sound engineer and I am having a problem with recording in Audition 3.0 that I need help with. When I record vocals, the sound wave plays back in the speakers/headphones with about a 1.5 second delay after it is recorded. It really throws off the artists I am working with when they are trying to record vocals. How do I kill headphone/speaker playback while someone is recording?

    There are problems with a lot of internal soundcards in a lot of laptops, and these seem to be so many and various that all you can do is use trial and error and hope for the best.
    External soundcards should cure any problem, but you may have to look into what else is running on the laptop, as so much is integrated, and some machines seem to have audio problems caused by wireless or other network adaptors etc.
    As I said, I use oldish Edirol and Tascam USB units. The Tascam has a known fault (noise when using a guitar channel at the same time as phantom-powering a mic on the other channel) which I can live with, and which I would hope has been solved in later and current versions. The Edirol is fine. I use 2 units to give me the connections I want (i.e. one has jacks, the other phono plugs, and different-sized headphone sockets). Both have ASIO and WDM drivers and work fine with AA 3.0 here. More modern devices will be USB 2 and will presumably provide more channels or higher sampling rates, but these are fine. Others will be more up to date than me, but I would think anything from Tascam, Edirol, Echo Audio and, I think, most M-Audio models should be fine. Just be wary of anything from Creative Labs. I, too, have had a bad experience with one of their products (it drew too much current from the USB port on a couple of the laptops, so I never got as far as finding out whether its ASIO drivers worked or not). But others will be more up to date than I am.
    I just went onto eBay over the years and kept bidding on sensible-looking devices until I got them at the right price, but it is probably better to buy new if you can afford it.
    Sorry I can't give a more definite answer.

  • Help is there anyway to record video ??

    I have a 20" 2.1GHz G5 iMac with built-in iSight and, I guess, iLife '05. I haven't bought any other software, just what came with my computer. I have tried to record video using the built-in iSight but I can't find anything that works, even in iMovie. I don't want to purchase anything; I don't understand why I would have to. Any help would be appreciated.

    Hi makeyurself,
    Unfortunately the built-in iSight won't normally work with the version of iMovie which came with iLife '05. To use it with iMovie you have to update to iLife '06.
    There is a workaround, but it assumes that you have access to a FireWire DV movie camera:
    1) Fire up iMovie and connect a DV camcorder,
    2) go to import mode in iMovie,
    3) disconnect the camcorder.
    Bizarrely, you should now be able to shoot using your built-in iSight.
    I suspect most people who want to do this, though, don't have ready access to a separate DV camcorder!
    Have a look at "EZ Jim's" page on Ralph Johns website at http://www.ralphjohnsuk.dsl.pipex.com/EZJim/EZJimpage3.html for some more suggestions.
    Cheers
    Rod

  • Query help needed for querybuilder to use with lcm cli

    Hi,
    I had set up several queries to run with the LCM CLI in order to back up personal folders, inboxes, etc. to lcmbiar files to use as backups. I have seen a few posts that are similar, but I have a specific question/concern.
    I just recently had to reference one of these backups, only to find it was incomplete. Does the query used by the LCM CLI also pull only the first 1000 rows? Is there a way to change this limit somewhere?
    Also, since importing this lcmbiar file for something 'generic' like 'all personal folders' pulls in WAY too much stuff, is there a better way to limit this? I am open to suggestions, but it would almost be better if I could create individual lcmbiar output files on a per-user basis. This way, when/if I need to restore someone's personal folder contents, for example, I could find them by username and import just that lcmbiar file, as opposed to one covering all 3000 of our users. I am not quite sure how to accomplish this...
    Currently, with my limited Windows scripting knowledge, I have set up a bat script that runs each morning and creates a 'runtime' properties file from a template, such that the lcmbiar file gets named uniquely for that day and its content. Then I call the LCM CLI with the proper command. The query within the properties file is currently very straightforward: SELECT * FROM CI_INFOOBJECTS WHERE SI_ANCESTOR = 18.
    To do what I want to do...
    1) I'd first need a current list of usernames in a text file that could be read in and parsed to single out each user (remember, we are talking about 3000); I'm not sure of the best way to get this.
    2) Then, instead of just updating the lcmbiar file name with a unique name as I do currently, I would also update the query (which would be different altogether): SELECT * FROM CI_INFOOBJECTS WHERE SI_OWNER = '<username>' AND SI_ANCESTOR = 18.
    In theory, that would grab everything owned by that user in their personal folder, right? And write it to its own lcmbiar file in a location I specify.
    I just think chunking something like this is more effective, and BO has no built-in backup capability that already does this. We are on BO 4.0 SP7 right now and move to 4.1 SP4 over the summer.
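    A sketch of the mechanics of steps 1) and 2) above (the file names and the @QUERY@/@OUTPUT@ placeholders are invented; the actual property names depend on your LCM CLI template): read the user list, substitute each name into the query, and write one runtime properties file per user.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;

    public class PerUserProps {
        public static void main(String[] args) throws IOException {
            // users.txt: one username per line, exported however suits you.
            List<String> users = Files.readAllLines(Paths.get("users.txt"));
            String template = Files.readString(Paths.get("lcm_template.properties"));
            for (String user : users) {
                String query = "SELECT * FROM CI_INFOOBJECTS WHERE SI_OWNER = '"
                        + user + "' AND SI_ANCESTOR = 18";
                String props = template
                        .replace("@QUERY@", query)
                        .replace("@OUTPUT@", "backup_" + user + ".lcmbiar");
                Files.writeString(Paths.get("runtime_" + user + ".properties"), props);
            }
        }
    }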
    Any thoughts on this would be much appreciated.
    thanks,
    Missy

    Just wanted to pass along that SAP Support pointed me to KBA 1969259, which had some good example queries in it (they were helping me with a concern I had about the lcmbiar file output, not with query design). I was able to tweak one of the sample queries in this KBA to give me more of what I was after...
    SELECT TOP 10000 static, relationships, SI_PARENT_FOLDER_CUID, SI_OWNER, SI_PATH FROM CI_INFOOBJECTS,CI_APPOBJECTS,CI_SYSTEMOBJECTS WHERE (DESCENDENTS ("si_name='Folder Hierarchy'","si_name='<username>'"))
    This exports inboxes, personal folders, categories, and roles, which is more than I was after but still necessary to back up... so in a way it is actually better, because I have one lcmbiar file per user containing all their 'personal' objects.
    So between narrowing down my set of users to only those who have actually saved things to their personal folder, having a query that actually returns what I expect it to return, and the help below with a job to clean up the excessive number of promotion jobs I am now creating... I am all set!
    Hopefully this can help someone else too!
    Thanks,
    missy

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
         (SELECT RID, rownum rnum
          FROM
              (SELECT rowid as RID
               FROM members
               WHERE last_name = 'Smith'
               ORDER BY joindate)
          WHERE rownum <= 100)
    WHERE rnum >= 1
      AND RID = members.rowid

    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index in the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    This will use the index for the predicate column (last_name) instead of the unique index I have defined on the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    Either way, it now uses the index of the ORDER BY column (joindate_idx), so it is much faster as it does not have to do a sort (remember: VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the rowid to the meaningful columns of data from the members table, as commented below:
    SELECT members.*      -- Select all data from members table
    FROM members,         -- members table added to FROM clause
         (SELECT RID, rownum rnum
          FROM
              (SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
               FROM members
               WHERE last_name = 'Smith'
               ORDER BY joindate)
          WHERE rownum <= 100)
    WHERE rnum >= 1
      AND RID = members.rowid   -- Merge the members table on the rowid we pulled from the inner queries

    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here, where the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off

    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOPS join method; as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step to the plan of the query using the join, but does not affect the other query.
