Is Workspace Manager the right solution for history data only

I've been experimenting with WM to enable history on all the tables in an application we have with about 100 tables.
Everything works fine, except that we have lots of FK constraints, which require versioning to be enabled in a specific order.
We know that changes to the DB layout will occur as the app evolves with the business requirements.
My question is: is WM the right solution here? I don't see any way to "transport" historical data in the event of a migration from the old DB layout to a new one (DB refactoring).
/christian

Hi Christian,
When you execute EnableVersioning on the tables, you can specify the complete set of tables that you want to version, instead of having to maintain the correct table order.
There shouldn't be a need to transport the data between tables. We support DDL changes on versioned tables through the BeginDDL/CommitDDL procedures. If you do require the transport of data between databases, note that we only support a full database import/export. The user guide covers both of these topics in greater detail.
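A minimal sketch of what that can look like (the table names below are placeholders, not from your schema; the calls are the documented DBMS_WM procedures):
BEGIN
  -- pass a comma-separated list so Workspace Manager resolves the FK ordering itself
  DBMS_WM.EnableVersioning('DEPARTMENTS,EMPLOYEES', 'VIEW_WO_OVERWRITE');  -- keep full history
END;
/
-- later, when the layout of a versioned table has to change:
BEGIN
  DBMS_WM.BeginDDL('EMPLOYEES');
END;
/
ALTER TABLE EMPLOYEES_LTS ADD (badge_no VARCHAR2(20));  -- DDL is issued against the <table>_LTS skeleton table
BEGIN
  DBMS_WM.CommitDDL('EMPLOYEES');
END;
/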
Regards,
Ben

Similar Messages

  • Is DPS the right solution for me?

    I am currently working on a project that includes an iPad app that will direct users to one of 30 brochures. Other developers are working on the iPad app, I am creating the brochures.
    If I use DPS to publish the brochures, do I need to create 30 custom viewer apps (at $495 a month/$6,000 a year), or is there a more cost-effective way to deliver the brochures? The brochures will be created in InDesign, and I would like them to have some interactive features (image slideshows, pinch & zoom, etc.), but they have to be viewable on an iPad.

    Well, a DPS professional license would allow you to publish all 30 of those brochures in one app. But that would mean that all your brochures are available to anyone who downloads the app, so depending on your specific needs that could cause a problem.
    I love doing brochures through DPS, but if you already have developers working on the front and back ends of an app, I don't know if you want to replace their work with DPS.
    But long story short, with the limited info you have provided, it sounds like DPS could possibly be the right solution for you.

  • Is JavaFX the right solution for this scenario...

    Hi,
    Is JavaFX the right choice for the following implementation choice I have to make? (see below for the requirement)
    Requirements:
    1. Provide a way to visualise within a web application an entity relationship type diagram (i.e. nodes with relationships between them). The backend database will hold the topology relationship. So to visualise this on a web application will need the ability to draw rectangles/lines/text etc.
    2. Provide a way to allow the user to trigger "add a new node" or "edit info in this node" - for example, a right-hand context-sensitive menu.
    3. Ideally will scale as the user resizes the browser window
    4. Would like the main functionality of the application to remain web based (is a Ruby on Rails application in fact), but have the visualization of the diagram render within the web application as transparently as possible.
    Options / Issues:
    * One issue I've struck in my investigation is that whilst the <canvas> tag looks good for Mozilla/Firefox etc, it does not seem to be supported in Internet Explorer. Hence cross-browser compatibility seems to be a real issue for the JavaScript-type solutions from what I can see. This is why I thought JavaFX may be good?
    * Options therefore seem to me to be:
    - javascript (e.g. <canvas> tag) => cross-platform issue
    - JavaFX / Applet => (this is what I'm asking about)
    - Microsoft => costs $$ for development environment etc
    - AIR / Flex => ??? costs $$ again I think
    Regards
    Greg

    thanks - I'm still a little confused re their products and which would fit best so I've sent them some questions (below if you're interested)
    Hello,
    Could you please assist me in understanding which of your products would satisfy my needs. In fact (a) whether JGraph itself would and if not, or if it's not ideal, (b) which other product would.
    REQUIREMENTS:
    1. Provide a way to visualise within a web application a connectivity type diagram (i.e. nodes with relationships between them, a network connectivity type of diagram).
    2. The server side (i.e. web application with database) will hold the topology relationship. HTTP type interfaces off the web application can be developed to provide the client side visualizing component with the topology data in the required format (assume this is XML)
    3. As well as just visualizing in the browser, there would need to be a way for the user to trigger a context-sensitive "add a new node" or "edit info in this node" - for example, a right-hand context-sensitive menu.
    4. Ideally the diagram will scale as the user resizes the browser window
    5. Would like the main functionality of the application to remain web based, but have the visualization of the diagram render within the web application as transparently as possible. The visualizing component would just take topology data and intelligently display it.
    6. DESIRABLE: Basic automated layout would be nice, or, depending on cost, more sophisticated automated layout.
    QUESTIONS:
    As well as your recommendation re which product would suit, I had some specific questions which I would appreciate clarification on:
    Q1 - I assume that if I have a web backend that can deliver topology information in an appropriate XML format via an HTTP REST type GET call, this could be used as the source of data for a JGraph visualisation running within an Applet?
    Q2 - If running within an Applet, can jGraph cater for a right hand menu option off the nodes/links on the graph, that I could use to trigger other calls back to the backend? (e.g. to trigger an Add New Node call)
    Q3 - Following on from Q2 scenario, if I trigger an add new node scenario, if I wanted to visualise the form to type in the attributes for the new node, could this be handled within the applet by jGraph, or would this be a case of just adding your own Swing based dialogs to handle this?
    Q4 - Does the base JGraph do any basic layout without having to go up to the Layout Pro package (which I think costs if using it commercially)?
    Q5 - If the answer to Q4 is No, how difficult would it be using the base JGraph library to do a basic layout? Is this doable/recommended? i.e. how would one "layout" the diagram if using only the base JGraph component? (noting from my requirements I'm really after a component I could send my topology information to in XML form and have it just visualise it for me)
    Q6 - Running the visualisation in an Applet in a browser, is the typical usage one where all changes to topology are made as calls to the backend? Or is there an approach where one would allow users to make changes to the topology within the applet, build up all the changes on the client, and then at some point sync these back to the backend? (I'm assuming the keep-it-simple approach would be not to do this)
    Q7 - Is there a sample application/project with source code that implements a JGraph in applet/browser talking to web backend for data?
    Q8 - How do JGraph Pro & mxGraph fit into the picture re solving my requirements in the most cost-effective manner?

  • Deciding whether TestStand is the right solution for my company

    I'm working at a growing company and we recently started a push in our Test group to try and standardize our future development as much as possible to allow as much re-use of code as possible.  I don't want to say exactly what it is we make for confidentiality reasons, but let's just say that 90% of what we make is the same basic type of product, but it comes in literally hundreds of models ranging in size from about the size of a dime to the size of a small washing machine.  We're considering switching to TestStand on all new stations and on all updates of previous stations to assist in rapid development with increased efficiency.
    Over the years we've made about 20 or so test stations as new versions of our product came out, each able to fit products of a different size and with different test requirements, but because our main products are all so similar to each other, there's only a pool of about 6 different types of tests we need to run, with some products needing only one of those tests and others needing all 6, and plenty in the 2-5 range.  The size differences among different products also mean we have a large number of different power supplies, and for various other reasons the measurement devices for the test stations aren't terribly standardized either.
    Step 1 was that we standardized the database we used to store all of the test data.  We now have one database that can store data from all of our test stations in the same way.  A few of the old stations had to have some creative workarounds done (i.e. instead of rewriting their current test program, write something that takes their old output format and converts it to the new database format, then upload it), but this step is 100% complete.
    Step 2 was to abstract out the most common pieces of hardware, so we have started basically putting the IVI types in a LVOOP wrapper.   Now instead of being limited to interchangeability among IVI devices we can also swap in any of the other assorted devices we've accumulated over the years, many of which don't have IVI drivers and even a few that don't follow SCPI standards, as this is a great use of inheritance and overrides.  We're also implementing a few device types that don't have IVI classes.  This effort is already well underway and we're OK here.
    Step 3 is where we're at now.  As we standardized on hardware interfacing, we also figured it would be a good idea to attempt to effectively write each of the 6 or so tests one last time each with very flexible setup parameters as inputs, using the abstracted hardware classes so that it wouldn't matter which hardware we had, so that all the tests would run the same way.  So we started looking at solutions to some form of sequence management, and we came up with a couple of possibilities for homegrown versions but after debating those for a while we started to ask ourselves, "Are we just re-inventing TestStand here?"
    We use TestStand on a few stations here already so we had licences for development, but the stations we use it on at the moment fall into the 10% of outliers that we have and are somewhat non-standard.  So none of the 6-ish usual tests were used on them and they're for a very different product type, so we never tried to use them with our standardized database.  They also were all made before I came on board here, so I don't have much experience with TestStand myself, and I've run into some roadblocks when trying to come up with a framework for how to integrate our standard database and our standard instrument LVOOP classes into a TestStand environment that meets our current and future test requirements.
    The first roadblock is that our database standardization and TestStand's built-in reporting don't mesh at all.  Right now, all of our test stations use the same "mega-cluster" to store their data in, which is a type-def cluster of all the things our products could ever be tested for on any of our stations.  We pass it to a common database write function that writes everything that has changed from the default values (NaN and blank strings\arrays) and ignores the rest, so we don't fill the database with zeroes for the tests we don't do.  I do want to emphasize how big this cluster got in order to have all of the potential save values we might ever need in it.  It is a cluster of 13 other clusters, the largest of which has 10 items in it, and one of those is a 1-D array of another cluster with 19 elements in it, and one of those elements is another 1-d array with a cluster of 24 units in it.  If all of the controls in the mega-cluster are shown on the front panel at once, I can't fit them on 2 full-size 1080p screens at the same time.  The context help for the main cluster is 5 screen heights tall just for the listing of all of the elements.  
    So I really can't see this being integrated with the built-in TestStand reporting system.  Additionally, the "Type" system in TestStand appears to be unhelpful when attempting to pass this cluster between steps, and even worse when trying to pass it between sequences.  When I add the mega-cluster to it, it also creates over 30 other types, one for each of the other clusters contained inside of it.  Within LabVIEW alone this is pretty easy to manage, as we just have a centralized repository of typedefs in source control that gets checked out to a common directory on each computer, and we make minor changes to accommodate new products a few times a year.  However, in TestStand, I can't find an equivalent method for just storing a bunch of .ctl files in the same shared directory like we do with LabVIEW.  I see that types can be stored in certain INI files or in sequence files.  The INI files appear to all be station-specific, and since all of our 20-ish test stations have different hardware and capabilities, we can't just synchronize those files.  And if I want to pass the mega-cluster type between sequence files that call sub-sequence files, every one of those files will need to be manually updated to get their types to match... and we'll be needing at least 200 sequence files when we're done, at a guess.  Unless there's some method I'm missing here, I don't see how to use TestStand to move this data around.  We could use something like just passing it by reference, but then I don't actually see much benefit to using TestStand instead of just pure LabVIEW, since that would bypass two of the things TestStand is supposed to handle (reporting steps as pass/fail against limits, and reporting and logging the results afterwards).  Plus then we have the additional burden of making sure the module holding the reference in LabVIEW doesn't unload before we're done using it.
    Is there some better way of handling massively complex types in TestStand that I am not seeing?
    There may be more that I'm not seeing yet, but if there isn't a way to handle these massively complex types in TestStand, all we'd be getting out of it is a slightly easier method of managing sequencing than we have now, which I don't see as being worth the investment in time and licenses it would take.
    I'd be grateful for any feedback.  Thanks!

    Considering you already seem to be leveraging LabVIEW heavily, before jumping into TestStand I would look at the PTP Sequencer. This seems to be a cut-down version of TestStand, but inside LabVIEW, and it is a simple install from the LabVIEW Tools Network using VIPM.
    If you are happy to look at alternative representations then maybe check out NI's StateChart Module or the Open StateChart by Vogel (available as an install from VIPM).
    The advantage of any of the above is that they will be a lot cheaper than TestStand and the required run-time licences, if you don't need all the power of TestStand (extensible everything).

  • Is Structured FM12 the right solution for this problem?

    I've been tasked with solving a tricky problem at my office. Briefly: We've got a technical manual that needs to be printed each time we sell a piece of equipment. Currently, the manual is produced using a combination of MS Access DB and a convoluted Word doc that uses Mail Merge to pull data from the Access DB into the appropriate fields. The DB has a hundred tables or so, and some of the tables are "calculated" values - i.e. the value in the fields is calculated based on values entered in other tables within the DB. The Word doc uses some kind of "if-then-else" logic to determine which tables to use for which fields when building the doc. This whole setup is currently maintained by the engineering, sales, and marketing departments.
    We currently use FM11 (unstructured) in the Technical Writing department, and my boss is asking me to figure out a way to migrate the Access/Word doc described above to a format we can use so we can take ownership of the documentation process for this particular line of equipment. I suspect the variables involved here go way beyond what we can do with conditional text and variables within FM, but I'm wondering if Structured FM (either 11 or 12) is more suited to this project, either by using some sort of conduit between FM and an SQL DB, or directly within FM using conditional text, variables, or some other organizational function present in either FM11 or FM12.
    Any guidance here would be appreciated. I'm not even sure what questions to ask at this point to get a proper shove in the right direction.

    I love this line: Get that SQL queries into XML directly or into CSV and transform the CSV into XML via XSLT. Reminds me of that bit in "Good Morning, Vietnam" where Robin Williams goes through the routine about the Vice President: "Well, sir, since the VP is such a VIP, shouldn't we keep his ETA on the QT? Otherwise, he might get KIA and then we'd all be on KP." And now, back to work...
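    For what it's worth, here's a minimal sketch of the "SQL queries into XML directly" idea, assuming the data eventually lives in a database that supports the standard SQL/XML functions (the table and column names below are made up purely for illustration):
    SELECT XMLELEMENT("tool",
             XMLELEMENT("model",       s.model_name),
             XMLELEMENT("maxPressure", s.max_pressure))
    FROM   tool_specs s
    WHERE  s.model_name = :sold_model;
    The resulting XML could then feed a structured FM/DITA workflow, or be transformed further via XSLT as suggested.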
    I'm going to try to answer all of the questions above, in order, and add a little info where appropriate.
    TW dept may or may not take over all maintenance of the doc. That's one of the recommendations I'm tasked with providing. My current thinking is, engineering should remain in charge of entering relevant tool data to the data repository, sales should provide TW with a "spec sheet" of what was sold, and TW should then use the specs to "build" the document for customer release.
    Will a DB still be used? Unknown. That seems the best way to catalog and access the data, but I'm open to alternatives if there are any that make sense.
    I am totally unfamiliar with structure, XML, DITA, HTML, etc. Literally 100% of my experience with FM has been in the unstructured workspace. This whole structured FM inquiry was inspired by a blurb in my "Unstructured FrameMaker 11" book (from p474: "If you need to use complex combinations of condition tags, consider setting up a workflow in the structured interface of FrameMaker. Structured documents allow you to create versions based on attributes, which are easier to use and more powerful than expressions.") A quick Google of this blurb didn't turn up much useful information, but this seems to jive with Lynne's input above re: attributes.
    Data is not currently in SQL - it's in Access DB. We can migrate to SQL if necessary - that's one of the answers I'm supposed to be providing.
    The reason this is all currently being done in Word is because that's what the Sales & Engineering departments understand, and currently they're responsible for these docs. I'm sure this started out as a simple, nerdy solution to a small problem - some engineer didn't want to maintain two separate Word docs when he could create a database and then use mail merge to automagically build the same thing. Since then, it's grown to hundreds of tables and thousands of possible permutations.
    We already have FrameMaker installations in the department. If we can do this with FM11, great, but if not, no big deal - boss has already said he wants to move to FM12 as soon as possible. In other words, purchasing software - unless it's something additional needed to make all this work - isn't really a budgetary consideration.
    As mentioned, I have no skills with using FM for any kind of structured project, but I'm willing to learn if it seems like a good solution. Just need to figure out how to proceed.
    Thanks for your input - looking forward to hearing what else you folks have to say.

  • Is the Snow Leopard Mac Mini Server the right solution for my office?

    I'm the de facto "sysadmin" for my small office, which usually just means I set up the wireless, configure network printing, troubleshoot little issues with Mail and MS Office products.
    Currently, we have 4 employees all on iMacs. We share files through a slapped-together setup, where there is a public folder on our owner's iMac and we all share files there. There are a few problems with this:
    - If the owner's computer is off, no-one can get to the shared files.
    - The owner's computer has had some strange "permissions" issues so sometimes files in the "Public" shared folder end up being read-only, or "read & write" for "nobody".
    - A 5th employee telecommutes on an iMac, and can't access the shared folder or files.
    So, we're considering getting a Mac Mini Server to do file storage and sharing, both locally and with telecommuting employees (of which there may be more in the future).
    - Is this the best solution to our needs - really just file sharing, no web hosting or anything like that?
    - What level of access control / authentication can we do on the Server? For example, could we have a password protected folder on the server to restrict access?
    - Would we need to upgrade our standard DSL service if we want to share files on the server with folks not on the local network?
    - Am I biting off more than I can chew here, given that my technical knowledge is slim but I am the most computer-literate of anyone in the office, so I will need to trouble-shoot any issues that come up with the server?

    For your stated goal, network-attached storage (NAS) or an always-on Mac client would be a simpler solution. Either preferably with RAID, and with provisions and storage for periodic archives.
    A Mac OS X Server box is overkill. The Mac client boxes have 10-client sharing.
    If you want single-signon and shared directory services and mail and web and various of the other pieces and services that are available within, then you can grow into a Mac OS X Server box.
    A server is rather more to manage, regardless of what you choose. You're getting DNS and networking and other core pieces, minimally, and you're also responsible for many of the configuration settings and services and details that a client box receives from a server box. And you're definitely dealing with protections and such across multiple boxes.
    For some other perspectives, there are various previous discussions of this posted around the forums. A search that includes NAS should turn up a few of these; this is a typical low-end alternative to running a server.

  • Is Oracle Text the right solution for this specific search need?

    Hi ,
    We are on Oracle 11.2.0.2 on Solaris 10. We need to be able to search data that has diacritical marks, and the search should ignore those marks. That is the requirement. I heard that Oracle Text has a preference called BASIC_LEXER which can bypass diacritical marks, so solely because of this feature I implemented Oracle Text - just for this diacritic-insensitive search and no other need.
    I set up the lexer preference like this (wrapped in a PL/SQL block so it runs as written):
    BEGIN
      ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
      ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    END;
    /
    With that in place I set up the table and index like this:
    CREATE TABLE TEXT_TEST
      (NAME  VARCHAR2(255 BYTE));
    --created Oracle Text index
    CREATE INDEX TEXT_TEST_IDX1 ON TEXT_TEST
    (NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
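    (The index parameters above reference a cust_wl wordlist preference that isn't shown in the post; a minimal sketch of what it could look like, assuming the standard BASIC_WORDLIST attributes:)
    BEGIN
      ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
      ctxsys.ctx_ddl.set_attribute ('cust_wl', 'PREFIX_INDEX', 'TRUE');     -- speeds up right-truncated queries such as P%
      ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'TRUE');  -- helps double-truncated (%abc%) queries
    END;
    /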
    --sample data to illustrate the problem
    Insert into TEXT_TEST
       (NAME)
    Values
       ('muller');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('müller');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('MULLER');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('MÜLLER');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('PAUL HERNANDEZ');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('CHRISTOPHER Phil');
    COMMIT;
    --Now there is an alternative solution, instead of Oracle Text: a plain function, given below (and it seems to work neatly for my simple need of removing the effect of diacritical characters in searches).
    --I need to evaluate which is better given my specific needs - the function below or Oracle Text.
    CREATE OR REPLACE FUNCTION remove_dia(p_value IN VARCHAR2, p_doUpper IN VARCHAR2 := 'Y')
    RETURN VARCHAR2 DETERMINISTIC
    IS
    OUTPUT_STR VARCHAR2(4000);
    begin
    IF (p_doUpper = 'Y') THEN
       OUTPUT_STR := UPPER(p_value);
    ELSE
       OUTPUT_STR := p_value;
    END IF;
    OUTPUT_STR := TRANSLATE(OUTPUT_STR,'ÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝàáâãäåçèéêëìíîïñòóôõöøùúûüýÿ', 'AAAAAACEEEEIIIINOOOOOOUUUUYaaaaaaceeeeiiiinoooooouuuuyy');
    RETURN (OUTPUT_STR);
    end;
    --Now I query for names that start with P%:
    --The query below gets me an unexpected result of two rows, because with Oracle Text each word is parsed as a token for the CONTAINS search...
    SQL> select * from text_test where contains(name,'P%')>0;
    NAME
    PAUL HERNANDEZ
    CHRISTOPHER Phil
    --Below query gets me the right and expected result of one row...
    SQL> select * from text_test where name like 'P%';
    NAME
    PAUL HERNANDEZ
    --Below query gets me the right and expected result of one row...
    SQL>  select * from text_test where remove_dia(name) like remove_dia('P%');
    NAME
    PAUL HERNANDEZ
    My entire need was only to be able to do a search that bypasses diacritical characters. Having implemented Oracle Text for that reason alone, I am wondering whether it was the right choice - all the more so now that I find the functionality of LIKE is not available in Oracle Text: Oracle Text searches are based on tokens or words, and their results differ from the output of the LIKE operator. So maybe I should have just used a simple function like remove_dia above for my purpose instead of Oracle Text.
    This function (remove_dia) just removes the diacritical characters, and maybe that is all my need requires. Can someone help review whether, given my need, I am better off not using Oracle Text? I need to keep the functionality of the LIKE operator and also need to bypass diacritical characters, so the simple function meets my need, whereas Oracle Text changes the behaviour of the search queries.
    Thanks,
    OrauserN

    If all you need is LIKE functionality and you do not need any of the complex search capabilities of Oracle Text, then I would not use Oracle Text. I would create a function-based index on your name column that uses your function that removes the diacritical marks, so that your searches will be faster. Please see the demonstration below.
    SCOTT@orcl_11gR2> CREATE TABLE TEXT_TEST
      2    (NAME  VARCHAR2(255 BYTE))
      3  /
    Table created.
    SCOTT@orcl_11gR2> Insert all
      2  into TEXT_TEST (NAME) Values ('muller')
      3  into TEXT_TEST (NAME) Values ('müller')
      4  into TEXT_TEST (NAME) Values ('MULLER')
      5  into TEXT_TEST (NAME) Values ('MÜLLER')
      6  into TEXT_TEST (NAME) Values ('PAUL HERNANDEZ')
      7  into TEXT_TEST (NAME) Values ('CHRISTOPHER Phil')
      8  select * from dual
      9  /
    6 rows created.
    SCOTT@orcl_11gR2> CREATE OR REPLACE FUNCTION remove_dia
      2    (p_value   IN VARCHAR2,
      3       p_doUpper IN VARCHAR2 := 'Y')
      4    RETURN VARCHAR2 DETERMINISTIC
      5  IS
      6    OUTPUT_STR VARCHAR2(4000);
      7  begin
      8    IF (p_doUpper = 'Y') THEN
      9        OUTPUT_STR := UPPER(p_value);
    10    ELSE
    11        OUTPUT_STR := p_value;
    12    END IF;
    13    RETURN
    14        TRANSLATE
    15          (OUTPUT_STR,
    16           'ÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝàáâãäåçèéêëìíîïñòóôõöøùúûüýÿ',
    17           'AAAAAACEEEEIIIINOOOOOOUUUUYaaaaaaceeeeiiiinoooooouuuuyy');
    18  end;
    19  /
    Function created.
    SCOTT@orcl_11gR2> show errors
    No errors.
    SCOTT@orcl_11gR2> CREATE INDEX text_test_remove_dia_name
      2  ON text_test (remove_dia (name))
      3  /
    Index created.
    SCOTT@orcl_11gR2> set autotrace on explain
    SCOTT@orcl_11gR2> select * from text_test
      2  where  remove_dia (name) like remove_dia ('mü%')
      3  /
    NAME
    muller
    müller
    MULLER
    MÜLLER
    4 rows selected.
    Execution Plan
    Plan hash value: 3139591283
    | Id  | Operation                   | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                           |     1 |  2131 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEXT_TEST                 |     1 |  2131 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | TEXT_TEST_REMOVE_DIA_NAME |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('mü%'))
           filter("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('mü%'))
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2> select * from text_test
      2  where  remove_dia (name) like remove_dia ('P%')
      3  /
    NAME
    PAUL HERNANDEZ
    1 row selected.
    Execution Plan
    Plan hash value: 3139591283
    | Id  | Operation                   | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                           |     1 |  2131 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEXT_TEST                 |     1 |  2131 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | TEXT_TEST_REMOVE_DIA_NAME |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('P%'))
           filter("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('P%'))
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2>

  • Is mpd the right solution for me? [more questions!]

    My scenario is as follows:
    I'm at work all day, but I want to listen to my music at home. I have complete access to SSH to my computer at home, I can open ports and all that from here.
    When I was running Windows, I could just TS into my box, hit play on my media player, and it would bring the audio to my work machine. It was a bit laggy, but OK.
    MPD seems like it could be a great solution.
    Can I set up MPD to run on my machine at home, and then connect with a client on my windows machine at work to it? Is that how it works?
    Or would I have to setup a shoutcast?
    What do you guys do for music at work? Or are you more productive than that?
    thanks!

    Thanks for following up guys. I ended up going with Icecast + MPD and it works a TREAT. It is exactly what I was looking for.
    Is there any way to configure it to be an .mp3 stream instead of .ogg? No real need, just wondered.
    Also, is Icecast set up to accept more than one user at a time by default, or does that take a lot of tweaking? I tried to have a friend connect to mine real quick while I was on, and the "peak clients" count went to 2, but he never got connected.
    Maybe my upstream just wasn't fast enough..
    oh! Also... maybe I should ask in another thread, but as long as we're talking about audio and .ogg and stuff..
    Does anyone know of a good batch converter from .mp3 to .ogg? I think I'm going to make the switch to a fully open standard.
    I have a few CDs that I purchased from the Apple store as well. I don't want to break forum rules by talking about breaking DRM, but I could just burn them and rip them to .ogg - or does anyone know of a way to convert those straight to .ogg?
    Again, if that's not a question that should be discussed, feel free to ignore/chastise me.
    thanks guys!
    oh, TU: great burglar joke, i should keep a directory full of gunfire and other related movie sounds to scare the piss out of people at random.

  • What is the right iTouch for ABA data collection, filming, behavior apps, etc?

    I am a behavior specialist in a public school, and currently I am looking to write a grant for some iTouches to take ABA data, film students for video modeling, and run apps such as Behavior Tracker and things of that nature. I am not sure what the best iTouch is for my situation (I am trying to be economical, but get what I need). Plus I have to write an itemized list of expenditures for iTunes. Any help, esp. from people who use iTouches for this reason, would be greatly appreciated.

    There is only one model of iPod touch currently offered. The only difference is storage capacity, and if you plan to be taking videos of students, having more capacity would be beneficial; get as much as you can. I'd ask at least for 32GB versions in the grant.
    Regards.

  • Is DPS the right direction for me?

    Hello - I'm currently teaching technology at the high school level. We have just received the CC upgrade. I had the thought of having apps for the school created by the students. I was wondering if it is possible to have apps for both Android and Apple of the InDesign projects the students have created, such as the weekly school newsletter and maps of the school for freshmen, etc. I figured this would be the easiest way to get content like this to the students and staff (or maybe it's not - open to all ideas!)
    I'm trying to wrap my head around the whole DPS/app store thing and am getting very overwhelmed!
    So my questions are:
    1 - Is it possible to have the interactive PDFs be created into "apps" for both stores?
    2 - Is this included with the CC subscription my school has purchased?
    If anyone can lead me in the right direction that would be extremely helpful. Thank you!

    DPS probably isn't the right solution for you. CC only includes support for building an app for iPad, not Android. Furthermore, Apple is stringent about the kinds of apps it allows in its store, and I suspect you'll have a hard time getting a basic school newsletter app approved.
    Honestly I think your type of content is better suited to a blog or a website built using Muse.
    Neil

  • I would like to know the enterprise solution for Windows and application event log management and analysis

    Hi
    I would like to know what the enterprise solution is for Windows and application event log management and analysis.
    I have recently done some research and found two applications that seem professional: 1 - ManageEngine EventLog Analyzer, 2 - SolarWinds LEM (SolarWinds Log & Event Manager).
    I want to know the point of view of Microsoft experts, and to hear their experience and solutions.
    thanks in advance.

    Consider MS System Center 2012.
    Rgds

  • Is AIR the right platform for photo management app?

    I need to build a web-enabled photo management app with the following features:
    server sign-in and authentication
    photos fetched from server via REST API/XML
    tag, rate (i.e. 1-5 stars), and edit properties for photos, with data updated on or sync-ed with server
    all the RIA UI goodies, like multi-select, drag/drop, AJAX-style auto-complete, overlay icons on thumbs,
    all the photo management goodies, like fast scrolling, histograms, resizeable thumbnail
    able to handle 1000s of photos in 1 album (server is on a local LAN)
    One key issue is how the platform loads an album with 1000s of photos in the background, and scrolls through pages of thumbnails. I tried before to build this in Java and all the threading issues were too much for me.
    I have good experience with CakePHP & MySQL and some experience with Java, but I am resigned to learning a new platform/libs. I just need to know where to invest my learning time.
    Can some experienced users pipe in on whether AIR is the right platform for me to build on? Should I use JavaScript etc., or learn Flex?
    Is there anything out there I could just build on? I'm also considering building on Yahoo YUI or Google GWT as an alternative.
    thx.
    m.

    Let me clarify the issue of 1000s of photos. The app would need to load thumbnails for 1000s of photos, but only open the full-size photo when a thumbnail is clicked - probably 120px max dimension, pre-rendered on the server. I have tried opening 1000 thumbs in a plain HTML page, and the photos come quickly - as if from the local file system - but it does seem to slow the UI until everything is loaded.
    Is there a way for AIR to load the thumbnails in the current "viewport" immediately, and the rest in the background? YUI has an imageLoader class that does something like this. Could I use this within AIR?
    The other requirement is to manage a DB of tags, ratings, and other attributes for the photos, but I can't imagine that is a big challenge, right?

  • SUBINACL - the right tool for the job?

    Here is the scenario:
    DomainA and DomainB
    Full Trust both ways
    Windows file share in DomainA. Created a Windows server in DomainB. Used Quest Migration Manager to migrate users and groups from DomainA to DomainB, keeping SID history. Xcopied files from the DomainA server to the DomainB server, keeping the permissions from DomainA.
    Now the trust will be broken, and we need to change the permissions from [email protected] to [email protected]. Thousands of shares and permissions, so we can't do it manually. So far I have tried SUBINACL in the following context:
    Subinacl /noverbose /output=e:\ntfs_perms.txt /subdirectories e:\Share
    and then
    Subinacl /playfile e:\ntfs_perms.txt
    This changed some of the permissions to DomainB, but removed a lot of permissions.
    Is this the right tool to use for this? Is this the right syntax for this scenario?
    Thanks!
    Mike

    Hi,
    Sorry for the delayed reply.
    It seems like you have enabled SID history.
    Based on my experience, after completing the migration, sIDHistory should be removed from migrated principals to minimize management complexity. Microsoft has provided a sample script and PowerShell command to remove sIDHistory.
    So, please first confirm whether the migration process has completed, then plan to remove the SID history.
    How To Remove SID History With PowerShell
    http://blogs.technet.com/b/ashleymcglone/archive/2011/11/23/how-to-remove-sid-history-with-powershell.aspx
    Before doing this, please back up the system state before making any important change, to avoid any risk.  After clearing the SID history, please try running the subinacl command again to check whether it works.
    Regards.

  • Why Can't VZ do the Right Thing For Once and Permit Network Extender Owners to Close their Networks?

    Given the fact that the Network Extender can be set for managed access or open access, clearly it can technologically be configured so that a closed network could be set up so that ONLY those users that are included in the "priority list" could access the Network Extender.
    If I had to guess, Verizon prefers to benefit from your internet connection and your investment in a Network Extender by bolstering their network in poor-reception areas for all of their customers in the vicinity, on Network Extender owners' backs, rather than do the right thing and permit a customer who has paid for the device, as well as their internet access, to close the network.
    I find this sleazy and hope VZW will rethink its approach to this.  We who subscribe to VZW for our cellular service pay the highest rates, on average, in the country for cell service.  We have also paid hundreds of dollars for the Network Extender, and pay for the internet that is used to facilitate the phone calls made through the network extender.
    Once in a while it would be nice if VZW did the right thing for its customers and did not, so blatantly at least, put corporate greed above the needs of its customers.

    I called Verizon tech support, and was informed that there is an option to close the Network Extender. This would allow only numbers on the white list to connect to the extender. Is the information I received incorrect? I spoke to them just the other day. Have you tried to configure the extender recently?
    My post asking for clarification is here: https://community.verizonwireless.com/message/1002928#1002928
    Thanks for any information you can provide.

  • I type in the right password for my iPad but it says it's wrong - what do I do?

    I type in the right password for my iPad but it says it's wrong. What do I do?

    Locked Out, Forgot Lock or Restrictions Passcode, or Need to Restore Your Device: Several Alternative Solutions
    1. iOS- Forgotten passcode or device disabled after entering wrong passcode
    2. iPhone, iPad, iPod touch: Wrong passcode results in red disabled screen
    3. Restoring iPod touch after forgotten passcode
    4. What to Do If You've Forgotten Your iPhone's Passcode
    5. iOS- Understanding passcodes
    6. iTunes 10 for Mac- Update and restore software on iPod, iPhone, or iPad
    7. iOS - Unable to update or restore
    Forgotten Restrictions Passcode Help
                iPad,iPod,iPod Touch Recovery Mode
    You will need to restore your device as New to remove a Restrictions passcode. Go through the normal process to restore your device, but when you see the options to restore as New or from a backup, be sure to choose New.
    You can restore from a backup if you have one from BEFORE you set the restrictions passcode.
    Also, see iTunes- Restoring iOS software.
