TransRelational implementation model (Chris Date)

Last Friday I attended a seminar by Chris Date on the TransRelational model, a radical new way of storing relational data inside an RDBMS. This technology promises to speed up query processing by a large factor and has numerous other benefits.
Does anybody know whether Oracle is pursuing this new technology, and whether they plan to implement it, perhaps in release 11?

Given that Chris Date and his chum Fabian Pascal spend a lot of their time bemoaning the stranglehold the giants have on the RDBMS industry, I wouldn't bet on it happening any time soon.
Cheers, APC

Similar Messages

  • Power view couldn't load the model or data source because the data source type is not supported

    Hi,
    I have SQL Server 2012 Standard edition on my local machine. I have developed an SSAS model and deployed it locally. I have been asked to develop a Power View report in Excel 2013 using this SSAS model, but when I try to do so in Excel 2013 Professional Plus, I get the error below:
    Power view couldn't load the model or data source because the data source type is not supported.
    Is Power View supported in the Standard edition of SQL Server, or does it require the Business Intelligence or Enterprise edition?
    Thanks in advance

    What type of SSAS install are you using?
    Power View in Excel 2013 currently only supports Tabular data sources.
    Only Power View in SharePoint 2013 supports both Tabular and multidimensional data sources (provided you have the required SharePoint and SQL Server updates installed).
    http://darren.gosbell.com - please mark correct answers

  • Modeling transaction data

    Hello all,
    I have 2 questions that I was hoping to get an answer to:
    Question 1:
    What is the normal way of modeling transaction data with a changing status in a BW system? Are there any links/threads to read?
    I thought that transaction data would go into the DSO, so that any changes to transaction data would be recorded there (very granular), while aggregated data would be placed in the cube.
    Question 2:
    For what reason would someone place a navigation attribute in the dimension of a cube?
    TIA
    PS - this is for BI 7.0
    Edited by: Casey Harris on Feb 4, 2008 10:15 PM

    Casey,
    A couple of quick answers that aren't links:
    1)  Ideally, BW 7.0 allows you to create an Enterprise Data Warehouse (EDW), where granular data is loaded into DSOs which then aggregate the data into cubes. That is what we strive for; in practice it doesn't always work out that way. Do some searches on EDW and you should find some info.
    2)  Navigation attributes are essentially links to master data attributes.  By not putting them directly in a cube, you save a little bit of space in the cube.  The most common use of them that we have is when users tell us they want to filter on a field that is not directly in a cube, but is in the master data attributes.  We can then easily make that field a navigation attribute.  Otherwise if you wanted to add the field to the cube, you'd have to reload all the data, which can be quite painful.
    Michael

  • HRXSS_PER_BEGDA  - Implementing Custom Effective Dates with a BADI

    What am I missing?
    We have a need to implement different effective dates based on infotype.
    (For example, we want changes to IT 0210 (US Withholding) to be effective
    on the first day of the current (unprocessed) pay period.)
    IT 0006 (Address) and IT 0009 (Bank Details) will also have their own date logic.
    To this end, we implemented BADI HRXSS_PER_BEGDA through the configuration,
    as ZHRXSS_PER_BEGDA with an implementation class
    of ZCL_IM_HRXSS_PER_BEGDA.
    To keep it simple, as a first test, we left the business logic out and just hard-coded a date as follows:
    method IF_EX_HRXSS_PER_BEGDA~DEFAULT_DATE.
      if INFTY = '0006' .
        begda  = '20071001' .
      endif .
    endmethod.
    ( an external breakpoint confirms that the code is reached)
    I would have thought that this would have set the start date to 10/01/2007 .
    When I change an address through the portal, the effective date, both as displayed on the ESS side and as saved on the backend, is always [today].
    What am I missing?
    ...Mike
    We're running:
    NW 2004s, ESS 1.0 SP09 (Web Dynpro version)
    with ECC 6.0 / ERP 2005 on the R/3 side

    HI Michael,
    I have a similar problem to the one you described here. I have implemented the BADI HRXSS_PER_BEGDA to default the start date of the dependents infotype (0021) to a date in the past based on the employee's hire date. However, the start date is always shown as the current date. I have used simple code such as:
    IF INFTY = '0021'.
      BEGDA = '20090901'.
    ENDIF.
    I shall be glad if you help me resolve this issue. Thanks a lot.
    Regards,
    Shakir

  • Data Models and Data Flow diagrams.

    Hi  Gurus,
    Can anybody explain the concepts of data models and data flow diagrams and how they are developed, with illustrations? And is it the responsibility of a technical or a functional consultant to translate business requirements and functional specifications into technical specifications, data flow diagrams, and data models?
    Your valuable answers will be rewarded.
    Thanks in advance.

    Hi,
    Concept of Data Models
    A data model, or data modelling, is basically how you define or design your BW architecture based on business requirements. It deals with designing and creating an efficient BW architecture, sticking to standard practices.
    Multi-Dimensional Modeling with SAP NetWeaver BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    /people/githen.ronney3/blog/2008/02/13/modeling-strategies
    Modeling the Data Warehouse Layer with BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3668618d-0c01-0010-1ab5-aa75c3a4dfc2
    /people/gilad.weinbach2/blog/2007/02/23/a-beginners-guide-to-your-first-bi-model-in-nw2004s
    Data Flow Diagrams
    This shows the path of data flow for each individual object in BW: how data gets loaded into that object and how it flows out of it, etc.
    Right-click on the data target > Show data flow.
    It shows all the intermediate layers through which data comes into that particular object.
    Responsibility of a Technical or a Functional consultant
    This is generally done in the design phase itself by a senior technical consultant, with the help of a functional consultant or a techno-functional consultant interacting with the business.
    Hope this helps.
    Thanks,
    JituK

  • Differences between operational systems data modeling and data warehouse da

    Hello Everyone,
    Can anybody help me understand the differences between operational systems data modeling and data warehouse data modeling?
    Thanks

    Hello A S!
    Do you mean the difference between modelling in normal form, e.g. 3NF, as in operational systems (OLTP), and modelling an InfoCube in a data warehouse (OLAP)?
    While in an OLTP system you want data tables free of redundancy and ready for transactions, meaning writing and reading a few records very often, in an OLAP system you need to read a lot of data for every query you run against the database. Often in an OLAP system you also aggregate these amounts of data.
    Therefore you use a different principle for the database schema, called the star schema. This means that you have one central table (called the fact table) which holds the key figures and has keys to other tables with characteristics. These other tables are called dimension tables; they hold combinations of the characteristics. Normally you design it so that your dimensions are small, so access to the data is more efficient.
    The star schema in SAP BI is a little more complex than described here, but it follows the same concept.
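    To make the idea concrete, here is a minimal star-schema sketch in plain SQL. The table and column names are purely illustrative; they are not the SAP BI implementation, which uses generated dimension and SID tables:
    -- Dimension tables hold combinations of characteristics
    CREATE TABLE dim_customer (
        customer_dim_id INT PRIMARY KEY,
        customer_name   VARCHAR(50),
        region          VARCHAR(30)
    );
    CREATE TABLE dim_product (
        product_dim_id  INT PRIMARY KEY,
        product_name    VARCHAR(50),
        product_group   VARCHAR(30)
    );
    -- The central fact table holds the key figures plus keys to the dimension tables
    CREATE TABLE fact_sales (
        customer_dim_id INT REFERENCES dim_customer (customer_dim_id),
        product_dim_id  INT REFERENCES dim_product (product_dim_id),
        sales_amount    DECIMAL(15,2),
        sales_quantity  INT
    );
    -- A typical OLAP query reads and aggregates many fact rows in one go
    SELECT c.region, p.product_group, SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_dim_id = f.customer_dim_id
    JOIN dim_product p ON p.product_dim_id = f.product_dim_id
    GROUP BY c.region, p.product_group;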
    Best regards,
    Peter

  • How to model hierarchical data?

    I need a way to model hierarchical data. I have tried using an object so far, and it hasn't worked. Here is the code for the class I made: http://home.iprimus.com.au/deeps/StatsGroupClass.java. As you can see, there are 4 fields: 1 to store the name of the "group", 2 integer data fields, and 1 Vector field to store all descendants. Unfortunately, this does not seem to be working, because the Vector get(int index) method returns an Object. This is the error I get:
    Test.java:23: cannot resolve symbol
    symbol  : method getGroupName  ()
    location: class java.lang.Object
          echo("Primary Structure with index 0: " + data.get(0).getGroupName());
                                                            ^
    1 error
    I figure I can't use the approach I have been using because of this.
    Can anyone help me out?

    Test.java:23: cannot resolve symbol
    symbol  : method getGroupName  ()
    location: class java.lang.Object
          echo("Primary Structure with index 0: " + data.get(0).getGroupName());
                                                            ^
    1 error
    You need to cast the return value from get(0):
    ((YourFunkyClass)data.get(0)).getGroupName();
    Be aware that you're opening yourself up to the possibility of a runtime ClassCastException. You could consider using generics if you can guarantee that the data Vector will contain only instances of YourFunkyClass, e.g. declaring the field as Vector<YourFunkyClass> so that get(0) returns the right type without a cast.
    Hope this helps

  • How to model this data

    In the sample data below, there are rows that contain header names, followed by a row with the data.
    The problem is that some of the "header" column values change. They represent box "sizes".
    Name | BoxType | Color | Qty | 45 | 11 | 13.5
    Compx | F | Red | 32 | 1 | 0 | 34
    Name | BoxType | Color | Qty | 75 | 11 | 12.5
    QuickMartZ | G | Blue | 68 | 13 | 7 | 77
    Name | BoxType | Color | Qty | 75 | 11 | 45
    QuickMartZ | F | Blue | 22 | 17 | 72 | 12
    How could I model this data or re-shape it into a schema such as
    Table
    =========
    AccountName
    BoxType
    Color
    Qty
    Size
    Ultimately I need to be able to extract a "rolled up" count of the boxes by size and their quantity.
    Something like this:
    AccountName
    BoxType
    Color
    Qty_Size1
    Qty_Size2
    Qty_Size3
    Qty_Size4
    Qty_Size5
    Qty_SizeN...

    Without some value which links the two rows together (other than the order of the rows, how do we know the header row above Compx belongs to it?), I don't think this is going to be possible as a set-based solution.
    You could use a cursor to move through the rows RBAR (row by agonizing row):
    DECLARE @table TABLE (name VARCHAR(20), boxType VARCHAR(20), color VARCHAR(20), qty VARCHAR(4), col1 INT, col2 INT, col3 FLOAT)
    INSERT INTO @table (name, boxType, color, qty, col1, col2, col3)
    VALUES
    ('Name', 'BoxType', 'Color', 'Qty', 45, 11, 13.5),
    ('Compx', 'F', 'Red', '32', 1, 0 , 34),
    ('Name', 'BoxType', 'Color', 'Qty', 75, 11, 12.5),
    ('QuickMartZ', 'G', 'Blue', '68', 13, 7 , 77),
    ('Name', 'BoxType', 'Color', 'Qty', 75, 11, 45),
    ('QuickMartZ', 'F', 'Blue', '22', 17, 72, 12)
    DECLARE @name VARCHAR(20), @boxType VARCHAR(20), @color VARCHAR(20), @qty VARCHAR(4), @col1 INT, @col2 INT, @col3 FLOAT,
    @pname VARCHAR(20), @pboxType VARCHAR(20), @pcolor VARCHAR(20), @pqty VARCHAR(4), @pcol1 INT, @pcol2 INT, @pcol3 FLOAT
    DECLARE @products TABLE (name VARCHAR(20), boxType VARCHAR(20), color VARCHAR(20), qty VARCHAR(4), size1 FLOAT, size2 FLOAT, size3 FLOAT)
    DECLARE @boxes TABLE (name VARCHAR(20), boxType VARCHAR(20), size1 FLOAT, size2 FLOAT, size3 FLOAT)
    DECLARE c1 CURSOR
    FOR SELECT *
    FROM @table
    OPEN c1
    FETCH c1 INTO @name, @boxType, @color, @qty, @col1, @col2, @col3
    WHILE @@FETCH_STATUS <> -1
    BEGIN
        IF @name = 'name'
        BEGIN
            SET @pname = @name
            SET @pboxType = @boxType
            SET @pcolor = @color
            SET @pqty = @qty
            SET @pcol1 = @col1
            SET @pcol2 = @col2
            SET @pcol3 = @col3
        END
        IF @name <> 'name'
        BEGIN
            INSERT INTO @products (name, boxType, color, qty, size1, size2, size3) VALUES (@name, @boxType, @color, @qty, @col1, @col2, @col3)
            INSERT INTO @boxes (name, boxType, size1, size2, size3) VALUES (@name, @boxType, @pcol1, @pcol2, @pcol3)
        END
        FETCH c1 INTO @name, @boxType, @color, @qty, @col1, @col2, @col3
    END
    CLOSE c1
    DEALLOCATE c1
    SELECT *
    FROM @products
    SELECT *
    FROM @boxes

  • Data Modeler Logical Data Type confusion!

    I don't get it.
    When defining a logical model, I want to assign data types to the attributes in my model.
    I understand a logical datatype like money, and that a logical datatype might be implemented differently in different databases. The concept makes perfect sense to me.
    I pick a datatype from the Logical Types.
    It ignores the logical datatype that I picked and puts in another datatype instead. I'm guessing that it's a physical mapping. I would understand the logical-to-relational mapper doing that, but I don't understand it happening at this point in the model life cycle.
    Let's say I pick Money. It puts Double into the logical datatype, not money.
    If I pick Date or DateTime, it puts Date into the logical datatype, so what is the point of giving me two types to pick from?
    Seems kind of wonky.

    David,
    We will soon publish a document on how the data types work.
    If you go to Tools > Types Administration, you will see that a logical type is mapped to a native type and a native type is mapped to a logical type.
    The logical type MONEY is mapped to DOUBLE for Oracle.
    Types and domain files can be customised.
    One could argue that the logical name MONEY should show up and not the native name. Showing the native name has the benefit that you see what the logical type is mapped to. Both approaches could have their supporters, no?
    Kind regards,
    René De Vleeschauwer
    SQL Developer Data Modeling.

  • Join data from various model / transpose data in models

    Hello there,
    is there a best practice or pattern for re-using data from a model when the original layout does not fit what the control needs? Consider the following example:
    var oData = {
      "persons": [
        {"name": "Sonja Software", "phones": ["12345", "54321"]},
        {"name": "Conrad Coder", "phones": []},
        {"name": "Mike Mailinglist", "phones": ["6789"]},
        {"name": "Hugo Hacker", "phones": ["54321"]}]};
    This could be easily used to set up a table for example, see this gist here.
    What is the best way to also show a table of all telephone numbers? I could manually go over the data and set up another model to achieve this:
    var oDataTransposed = {
      "phones": [
        {"number": "12345", "persons": ["Sonja Software"]},
        {"number": "54321", "persons": ["Sonja Software", "Hugo Hacker"]},
        {"number": "6789", "persons": ["Mike Mailinglist"]}]};
    But the two models would get out of sync as soon as, e.g., a name is changed. The above-mentioned gist has three files. The `exampleTransposed.js` contains a manual inversion making the phone numbers the primary items. But there is certainly a more elegant way to achieve this, isn't there? Maybe by creating a model which itself is bound to the original model by some kind of databinding or some sort of calculated properties, see this comment on knockoutjs.
    M.
    PS: Maybe one should also add a
    {"number": undefined, "persons": ["Conrad Coder"]}
    in order not to lose some of the persons, but this is just a minor detail.

    Hello Martin,
    In the example https://gist.github.com/ricma/cf81829181cfd4e86354 the JSONModel is used. The JSONModel is a client-side model. Each of the two tables (person table, phone table) needs its own data and model. In order to synchronize the models after editing a table cell, two conversion functions are needed. One function converts person data into phone data, the other converts phone data into person data (see the example for the implementation):
    function makePhoneList(oPersonList) { /* builds the phone-centric data from the person data */ }
    function makePersonList(oPhoneList) { /* builds the person-centric data from the phone data */ }
    Furthermore, two callback functions are needed. These callback functions shall be called whenever a table cell is modified:
    function personModelChanged(oControlEvent) {
      oPhoneData = makePhoneList(oPersonData);
      oPhoneModel.setData(oPhoneData);
    }
    oPersonTable.addColumn(new sap.ui.table.Column({
      label: new sap.ui.commons.Label({text: "Name"}),
      template: new sap.ui.commons.TextField({
        value: "{personModel>name}",
        change: personModelChanged})}));
    function phoneModelChanged(oControlEvent) {
      oPersonData = makePersonList(oPhoneData);
      oPersonModel.setData(oPersonData);
    }
    oPhoneTable.addColumn(new sap.ui.table.Column({
      label: new sap.ui.commons.Label({text: "Phone Number"}),
      template: new sap.ui.commons.TextField({
        value: "{phoneModel>number}",
        change: phoneModelChanged})}));
    The solution works but has some disadvantages:
    - data and models have to be synchronized manually
    - for n models, 2n-2 conversion functions are needed
    - every modification causes creation of at least n-1 data structures and models
    Best regards,
    Frank

  • IDES Model Company Data

    Hi:
    I have recently installed SAP IDES 4.7 with a view to getting to grips with the SD module (configuration). Anyhow, I was just wondering if anyone knows of any documentation and/or a step-by-step guide for setting up a model company? By this I mean a model company with all the master data, etc., thereby allowing the SD module to be implemented.
    I suppose one could always make up a company; however, I'm sure it would be far easier to have this already prepared!
    Thanks

    Hello Oliver,
    Follow this link: http://help.sap.com/saphelp_46c/helpdata/en/af/fc4f35dfe82578e10000009b38f839/frameset.htm
    It is the SAP IDES documentation for all the demo scenarios that you can run in it. In your case I suggest following the Logistics then Sales and distribution path. It will lead you to a menu with a list of scenarios and data to use to replicate those scenarios.
    Hope it helps.

  • InfoCube Modelling-Adding data from different ODS's on to the Infocube

    Hi Experts,
    I am new to SAP BI. I have a basic question about modelling an InfoCube.
    In our requirement, I have to populate data from 9 custom SAP tables into 9 ODSs, and then take this data into InfoCubes.
    They also want to reduce the number of cubes as much as possible, so I have to combine the data from the different ODSs and build 2-3 InfoCubes.
    For Example.
    I am going to combine data from 5 ODSs into 1 cube based on the delivery number.
    There are 5 ODSs with the common key Delivery number. Suppose I have added one set of fields from ODS1.
    Now, when I add another set of fields from the second ODS, WHAT WILL HAPPEN TO THE 'Delivery Number' field?
    I will make it clear.
    I already have a record in the cube containing the fields: Delivery no, field_a, field_b, field_c, field_d, where 'Delivery no = 11233'. This record comes from ODS1.
    Now, I want to add data from ODS2, containing the fields: Delivery no, field_e, field_f, field_g, field_h.
    What happens to the already existing record in the cube with 'Delivery no = 11233'?
    Will the value in this InfoObject get overwritten?
    Or will it combine the data from both ODSs and show it as ONE record?
    Please advise. How should I handle this scenario?
    Thanking You in Advance
    Shyne Sasimohanan

    Here is the answer to your question, and a suggestion.
    the data will look like as given below
    Delivery no, field_a, field_b, field_c, field_d, field_e, field_f, field_g, field_h
    11233           1           1           1          1            0           0            0            0
    11233          0           0          0             0           1           1           1          1 
    But the best way, according to design standards, is to create another DSO on top of all the DSOs, combine all the data in that DSO, and send the data to the InfoCube. Then the data will be shown as below:
    Delivery no, field_a, field_b, field_c, field_d, field_e, field_f, field_g, field_h
    11233           1           1           1          1            1           1           1          1 
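    Just to illustrate the difference in plain SQL terms (the table and column names are hypothetical, and a BW cube is of course not maintained this way): the cube behaves like a fact table holding both loaded rows, and a query aggregates them by delivery number, whereas the consolidating DSO merges them into one row before the cube is loaded.
    -- Two rows loaded into the cube, one from each source DSO
    CREATE TABLE cube_fact (
        delivery_no VARCHAR(10),
        field_a INT, field_b INT, field_c INT, field_d INT,
        field_e INT, field_f INT, field_g INT, field_h INT
    );
    INSERT INTO cube_fact VALUES ('11233', 1, 1, 1, 1, 0, 0, 0, 0); -- from ODS1
    INSERT INTO cube_fact VALUES ('11233', 0, 0, 0, 0, 1, 1, 1, 1); -- from ODS2
    -- A query on the cube aggregates the key figures per delivery number
    SELECT delivery_no,
           SUM(field_a) AS field_a, SUM(field_e) AS field_e
    FROM cube_fact
    GROUP BY delivery_no;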
    Regards,
    Siva A

  • Need help to implement a small data warehouse or datamart

    Hi all,
    we want to improve our reporting activities. We have 3 production relational Oracle databases, and we want to build one reporting database with historized and aggregated data that answers our reporting needs.
    The databases we are using are Oracle Database 10g.
    Currently we still run queries directly against the databases for reporting purposes, but from searching the internet I know that we can implement a data mart or data warehouse to group all aggregated information for reporting.
    The information I need is: is there a tool in Oracle for data warehousing? Is Oracle Warehouse Builder the right tool, given that the sources of our data are all Oracle databases plus some flat files?
    Could you advise what I should use for that kind of reporting need? Can I use Oracle Warehouse Builder to develop the ETL?
    Do I need a license to use Oracle Warehouse Builder?
    Thanks,

    As a simple answer to all your questions: YES.
    Yes, Oracle Warehouse Builder could be a tool to use.
    Yes, Oracle Warehouse Builder needs a license.
    Besides that, you also need a license for that extra database.
    If you already have that, and you have the queries with which you now retrieve data, you can always choose the cheap way and build materialized views with these queries, as sketched below.
    Keep in mind, however, that a materialized view (or snapshot) does not support inline selects.
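    A minimal sketch of that cheap approach, assuming Oracle 10g syntax; mv_sales_summary, sales_orders and the selected columns are only placeholders for one of your existing reporting queries:
    -- Hypothetical materialized view built from an existing reporting query,
    -- refreshed completely once a day
    CREATE MATERIALIZED VIEW mv_sales_summary
      BUILD IMMEDIATE
      REFRESH COMPLETE
      START WITH SYSDATE NEXT SYSDATE + 1
    AS
    SELECT region,
           product_id,
           TRUNC(order_date) AS order_day,
           SUM(amount)       AS total_amount,
           COUNT(*)          AS order_count
    FROM   sales_orders
    GROUP  BY region, product_id, TRUNC(order_date);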
    HTH,
    FJFranken
    My Blog: http://managingoracle.blogspot.com
    P.S. If this answers your question, please set the thread to answered and award the points. It is appreciated

  • Different LOV behavior between SQL query data model and data template

    I have noticed different behavior when using parameters linked to list of values (LOV) of type menu with the multiple selection option enabled and a SQL query data model vs a data template. Here's the example because that first sentence was probably really confusing.
    SQL Query:
    select
    plmc.MonthCode, plmc.ModalityDim, plmc.ModalityName,plmc.RegionDim
    from
    DataOut.dbo.PatientLabMonthlyCross plmc
    where
    plmc.MonthCode = 200202
    and plmc.RegionDim = 1209
    and 1 =
    case
    when coalesce(:modalityDim,null) is null
    then 1
    else
    case
    when plmc.ModalityDim in (:modalityDim)
    then 1
    else 0
    end
    end
    Putting BI Publisher into debug mode, defining a data model of type SQL Query, defining a parameter called :modalityDim linked to a LOV that allows multiple selections, and selecting a couple of values from the LOV, the output of the prepared statement is:
    [081607_122647956][][STATEMENT] Sql Query : select
    plmc.MonthCode,
    plmc.ModalityDim,
    plmc.ModalityName,
    plmc.RegionDim
    from
    DataOut.dbo.PatientLabMonthlyCross plmc
    where
    plmc.MonthCode = 200202
    and plmc.RegionDim = 1209
    and 1 =
    case
    when coalesce(?,?,null) is null
    then 1
    else
    case
    when plmc.ModalityDim in (?,?)
    then 1
    else 0
    end
    end
    [081607_122647956][][STATEMENT] 1:6
    [081607_122647956][][STATEMENT] 2:7
    [081607_122647956][][STATEMENT] 3:6
    [081607_122647956][][STATEMENT] 4:7
    [081607_122654713][][EVENT] Data Generation Completed...
    [081607_122654713][][EVENT] Total Data Generation Time 7.0 seconds
    Note how the bind variable :modalityDim was changed into two parameters in the prepared statement.
    When I use this same SQL Query in a data template the output is:
    [081607_012113018][][STATEMENT] Sql Query : select
    plmc.MonthCode,
    plmc.ModalityDim,
    plmc.ModalityName,
    plmc.RegionDim
    from
    DataOut.dbo.PatientLabMonthlyCross plmc
    where
    plmc.MonthCode = 200202
    and plmc.RegionDim = 1209
    and 1 =
    case
    when coalesce(?,null) is null
    then 1
    else
    case
    when plmc.ModalityDim in (?)
    then 1
    else 0
    end
    end
    [081607_012113018][][STATEMENT] 1:'6','7'
    [081607_012113018][][STATEMENT] 2:'6','7'
    [081607_012113574][][EXCEPTION] java.sql.SQLException: Syntax error converting the nvarchar value ''6','7'' to a column of data type int.
    Note the exception because it is trying to convert the multiple parameter values.
    Am I doing something completely wrong here? I really need to use a data template because I will need to link a couple of queries together from different database vendors.
    -mark

    This is for 10.1.3.4 - because in 11g every SQL query is automatically part of a data model.
    In 10g, the SQL Query data model is for simple, unrelated SQL queries.
    If you need to use advanced features such as:
    a) multiple SQL queries that are joined in master-detail relationships
    b) before/after report triggers
    then you will need to use the data template, which is an XML description
    of the queries, links, and PL/SQL calls.
    hope that helps,
    Klaus

  • Data Modeler: Relational data model questions

    1. Can a different notation be specified for relational data models' constraints? Specifically, I'd like crow's feet. BTW, the docs show crow's feet and parent pointer (with the arrowhead), but there's no such thing in the actual modeler.
    2. Is there any way to manually route FK constraints lines?
    3. When forward engineering from logical, is there any way to indicate the preferred name for keys and indexes (primary, unique, foreign)?
    4. Mandatory/optional indicator on tables: what exactly does 'N' or 'A' stand for? I can understand 'N' meaning "Not optional", but 'A'? Wouldn't it be simpler to use '*' and 'o' like in the logical?
    Man, do I ever miss Designer!
    Thanks,
    Patrick

    Here's one more question:
    I've transformed several super/sub entities to relational, and some of the tables do not allow me to open Properties (on the table). I can use the navigator to open column properties, but cannot open table properties (neither from the diagrammer nor from the navigator). Some of the tables are two or three subtype levels deep, and I haven't figured out why some open and some don't.
