Create pagination for a table containing more than 2 PKs

Guys,
Is there any way to use the wizard to create FORM PAGINATION for tables having more than 2 PKs?
Regards,

Hi Dimitri,
After the user logs in and starts using this form, this is what should happen to these 3 PKs:
- The 1st PK will be a unique number for the questions.
- The 2nd PK should be the ID for each survey (created as a global variable in Zero Page).
- The 3rd PK will be the division for this user (also created as a global variable in Zero Page), so this is the only field which will not change while the user is logged in.
I tried to use the first 2 PKs and set the last one to get a default value. It worked in the beginning and gave me the 1st record for each survey, but my problem is that I couldn't navigate to the 2nd record, and once I log out and log in again it won't give me anything.
Regards,
ehammad
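To make the setup concrete, the form's fetch amounts to a three-column key lookup in which two of the values come from the global items; all item, table and column names below are hypothetical:
SELECT question_no, survey_id, division, answer
FROM   survey_questions
WHERE  question_no = :P1_QUESTION_NO   -- 1st PK: unique question number
AND    survey_id   = :G_SURVEY_ID      -- 2nd PK: global item, set per survey
AND    division    = :G_DIVISION;      -- 3rd PK: global item, fixed while logged in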

Similar Messages

  • iPhoto hangs when opening a slideshow containing more than 500 photos

    Hi, I tried to create a slideshow that contains more than 500 photos. Now, when trying to open / execute / select the slideshow, iPhoto hangs with the spinning coloured wheel and I have to force it to quit. I already tried to rebuild the iPhoto Library (all the possible options) and tried to update the version from 9.4.3 to 9.5, but the problem still persists. Is there a way to delete this slideshow without selecting it? How can I solve the problem, please?
    Kind Regards

    Can you restore your iPhoto Library from a backup version that was created before you added the slideshow?
    If not, have you tried to rebuild the slideshow with iPhoto Library Manager?
    As described by Terence Devlin here:  Re: iphoto library was created with an unrelased version of iphoto please quit and ugrade library by opening it in iphoto 2 or iphoto 4

  • Spatial index creation for table with more than one geometry columns?

    I have a table with more than one geometry column.
    I've added a record to the user_sdo_geom_metadata table for every geometry column in the table.
    When I try to create spatial indexes over the geometry columns in the table, I get this error message:
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
    ORA-06512: at line 1
    What is the solution?

    I've found errors in my user_sdo_geom_metadata entries.
    The problem no longer exists!
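    For reference, the registration this error points at looks like the following: one USER_SDO_GEOM_METADATA row per geometry column, created before its index; the table name, column name, bounds and SRID below are illustrative, not the poster's actual values:
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    VALUES (
      'MY_TABLE',
      'GEOM1',
      SDO_DIM_ARRAY(
        SDO_DIM_ELEMENT('X', -180, 180, 0.005),
        SDO_DIM_ELEMENT('Y',  -90,  90, 0.005)
      ),
      4326
    );
    -- Repeat with COLUMN_NAME = 'GEOM2' for the second geometry column,
    -- then create each index with INDEXTYPE IS MDSYS.SPATIAL_INDEX.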

  • How to create an iBOT which contains more than one dashboard page

    Hi,
    I want to create an iBOT which delivers its content in HTML or PDF format but contains more than one dashboard page. The idea is that I have several dashboards and I want to send the most important pages from different dashboards by e-mail (via iBOT).
    Is it possible to send different pages from different dashboards in a single e-mail? I cannot find a way to accomplish this in the delivery content of the iBOT.
    Thank you.

    Hi,
    Yes, I can add them to a briefing book, but the clients that receive it by e-mail would need the Briefing Book Reader in order to view the contents. Is there any way to send the briefing book in PDF format?
    Thank you.

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more then 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
    I tried to insert such a row and no row chaining occurred.
    As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote: (the question quoted in full above)
    Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
    With the table created, insert 1,000 rows into the table, populating each of the first 257 columns with a random 3-byte string, which should result in an average row length of about 771 bytes.
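    The post does not show the CREATE TABLE statement itself; a minimal PL/SQL sketch that generates it (VARCHAR2(10) columns named COL1 through COL1000 are an assumption inferred from the INSERT statements below):
    DECLARE
      v_sql VARCHAR2(32767) := 'CREATE TABLE T4 (';
    BEGIN
      FOR i IN 1 .. 1000 LOOP
        v_sql := v_sql || 'COL' || i || ' VARCHAR2(10)'
                       || CASE WHEN i < 1000 THEN ',' END;
      END LOOP;
      EXECUTE IMMEDIATE v_sql || ')';
    END;
    /
    The test script itself: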
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      -- ... columns COL4 through COL254 elided in the original post ...
      COL255,
      COL256,
      COL257)
    SELECT
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      -- ... one DBMS_RANDOM.STRING('A',3) per elided column ...
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What are the results of the above?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        166
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        166                                                
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252        332
    Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        332 
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        332
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
    "Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case.
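    Not part of the original post, but the row pieces can also be counted directly: ANALYZE populates the CHAIN_CNT column of USER_TABLES with the number of rows stored in more than one piece (chained or migrated):
    ANALYZE TABLE t4 COMPUTE STATISTICS;
    SELECT num_rows, chain_cnt
    FROM   user_tables
    WHERE  table_name = 'T4';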

  • How can we create a table with more than 64 fields in the default DB?

    Dear sirs,
    I am taking part in the process of migrating a J2EE application from JBoss to SAP Server. I have imported the EJB project.
    I have an entity bean with 79 CMP fields. I have created the bean and also created the table for it, but when I tried to build the dictionary, I got the error message given below:
    "Dictionary Generation: DB2:checkNumberOfColumns (primary key of table IMP_MANDANT): number of columns (79) greater than allowed maximum (64) IMP_MANDANT.dtdbtable MyAtlasDictionary/src/packages"
    Does this mean that we cannot create tables with more than 64 fields?
    How can I create tables with more than 64 fields?
    Kindly help,
    Thank you,
    Sudheesh...

    Hi,
      I found a link on the help site which says the limit is 1024 (1023 without the key).
    http://help.sap.com/saphelp_nw04s/helpdata/en/f6/069940ccd42a54e10000000a1550b0/content.htm
      Not sure about any limit of 64 columns.
    Regards,
    S.Divakar

  • Not able to create a table with more than 64 fields in dictionary

    Hi,
    I have created a Java dictionary in NetWeaver Developer Studio. I have to create a table with more than 64 fields in it. I was able to create the table in the dictionary, but when I tried to build it, I got an error message saying "more than 64 fields are not allowed". If I create the table with 64 fields or fewer, I can build the dictionary and deploy it.
    That is, when I create a table with more than 64 fields, I am not able to compile the dictionary, but if I reduce the fields to 64 or below, I can compile it.
    Kindly help me to solve the problem.
    Regards,
    Sudheesh

    Hi,
    Sudheesh, as far as I am aware, the number of fields a table can hold actually depends on the total table width allowed by the various vendors.
    So I tried creating a table with too many fields, and I am reproducing the errors I obtained:
    Error               Dictionary Generation: DB2:checkWidth TMP_1: total width of table (198198 bytes) greater than allowed maximum (32696 bytes)     TMP_1.dtdbtable     TestDictionary/src/packages
    Error               Dictionary Generation: DB4:Table TMP_1: fixed length: 198366 (32767).     TMP_1.dtdbtable     TestDictionary/src/packages
    Error               Dictionary Generation: DB6:checkWidth TMP_1: total width of table (297200) including row overhead is greater than the allowed maximum for 16K tablespaces.     TMP_1.dtdbtable     TestDictionary/src/packages
    Error               Dictionary Generation: MSSQL:checkWidth TMP_1: total width (198215) greater than allowed maximum (8060)     TMP_1.dtdbtable     TestDictionary/src/packages
    Error               Dictionary Generation: SAPDB:checkWidth TMP_1: total width (198297) greater than allowed maximum (8088)     TMP_1.dtdbtable     TestDictionary/src/packages
    Error               Dictionary Generation: Table TMP_1 is not generated     TMP_1.dtdbtable     TestDictionary/src/packages
    I hope you can understand what the errors state. I am trying to create a table whose total width (the sum of the widths of all columns) is greater than the maximum allowed by the various vendors, such as DB2, MSSQL, SAPDB, etc.
    I hope this answer helps you create your table suitably.
    Regards,
    Harish
    (Please award points if this answer has been useful)

  • Can I create a table with more than 40 columns ?

    Hello,
    I need to organize my work with a table and need about 66 columns. I am not very good with Numbers, but I know a little bit of Pages. I cannot find a way to create a table with more than 40 columns... Is it hopeless? (Pages '08 v. 3.0.3)
    Thank you all

    It's never too late to try to teach users that the Search feature is not only for helpers.
    The number of columns allowed in Numbers is not a relevant value when the question is about Pages '08 tables.
    I tried to copy a 256-column Numbers table into a Pages '09 document.
    I didn't get a table, but values separated by TABs.
    I tried to convert this text range into a table and got a clear message.
    If I remember well, in Pages '08, the limit is 40 columns.
    It seems that you are a specialist in inaccurate responses, but I'm not a moderator, so I'm not allowed to urge you to double-check what you wrote before posting.
    Yvan KOENIG (VALLAURIS, France) Friday, April 29, 2011 14:57:58
    Please:
    Search for questions similar to your own before submitting them to the community.

  • Can I create an IN/UP/DE trigger on an existing table having more than 2 lakh rows?

    Hi All,
    Can I create an INSERT/UPDATE/DELETE trigger on an existing table having more than 2 lakh (200,000) records? If yes, does that work for only some kinds of triggers, or for all? If not, tell me the reasons and limitations, with guidelines.
    Thanks in Advance.

    Hi,
    Sure; you can create new triggers for tables that already have rows. The type of trigger and the number of rows already in the table don't matter.
    Try it. If the trigger works with a small table in your Development database, then there's no reason to expect it won't work on a larger table in your Production database.
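    As a quick illustration, the syntax is identical regardless of how many rows the table already holds; the table and audit-log names here are made up:
    CREATE OR REPLACE TRIGGER emp_iud_trg
      AFTER INSERT OR UPDATE OR DELETE ON emp   -- emp may already hold 200,000+ rows
      FOR EACH ROW
    BEGIN
      INSERT INTO emp_audit (action_code, changed_on)
      VALUES (CASE
                WHEN INSERTING THEN 'I'
                WHEN UPDATING  THEN 'U'
                ELSE 'D'
              END,
              SYSDATE);
    END;
    /
    Note that the trigger fires only for rows touched after it is created; the existing rows are not affected.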

  • Is there any way to create a table with a name longer than 30 characters?

    hi all,
    Please tell me, is there any way to create a table with a name longer than 30 characters in Oracle 10g?
    Regards

    Hi,
    If you want a table name longer than 30 characters,
    I am sure your naming convention is not up to the mark.
    It is not possible in 10g, nor in 11g.
    Thanks
    Yogesh Nagle
    India

  • How can I join more than 20 tables which contain more than 5 lakh records?

    How can I join more than 20 tables which contain more than 5 lakh (500,000) records?

    If you're trying to join 20 tables I would check:
    - Are all the joins necessary? It's easy sometimes to join to another table just because you're unsure whether it's required.
    - What sort of application is it? 20 joins seems a lot to me. Are you trying to achieve too much with one query? Is it possible to break the problem down?
    - If it is necessary to join so many tables, then force the use of hash joins in the query (a sketch follows this list), especially if you're processing a lot of data and want the best throughput. If you want a quicker response time, this will not be appropriate.
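    As a sketch of that last point, hash joins can be requested with the USE_HASH hint; the table names, aliases and join columns below are placeholders:
    SELECT /*+ USE_HASH(o c i) */
           o.order_id, c.customer_name, i.item_desc
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    JOIN   order_items i ON i.order_id = o.order_id;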

  • Populating more than one table and more than one field

    I need some suggestions and this forum has always been a great source of good advice.
    I have a web form at the following location: http://www.webdevpractice.com/genoptix/CE/register.php
    Here's what the web form needs to do:
    Send a confirmation email listing seminars the visitor checked on the form.
    Create a similar message on a confirmation page.
    Populate two tables.
    Items 1 and 2 are working fine.
    The advice I need is on how to populate two tables in the database.
    There are three tables:
    ACCOUNTS
    account_id
    first_name
    last_name
    medtech_id
    job_title
    npi
    company
    city
    state
    email
    phone
    contact
    ATTENDANCE
    attendance_id
    account_id
    seminar_id
    SEMINARS
    seminar_id
    seminar
    speaker_first_name
    speaker_last_name
    date
    The web form contains data that need to go into the ACCOUNTS table and the ATTENDANCE table. The challenge is getting the account_id and seminar_id into the ATTENDANCE table. If all the information was inserted properly, I could write a query that revealed who was taking what seminar.
    Inserting data into the ACCOUNTS table is not a problem. I will create another form to insert information into the SEMINARS table, so that should not be a problem either. But inserting the account_id and the seminar_id is what I am wondering about. Also, can more than one record be inserted into a table? If a user checks more than one seminar, each seminar (seminar_id) would need to be inserted into the ATTENDANCE table as a separate record along with the account_id. I'm thinking I may have to do this manually. Also, the values for each seminar are their dates and titles; I used these as the values to send the confirmations.
    I'm just looking for advice at this point. Is this doable?

    Bregent,
    The table I am wondering about is the ATTENDANCE table. There are two fields in addition to the primary key: account_id and seminar_id. The field I am concerned with is the seminar_id, which comes from a group of checkboxes on the form. So, one form could create several records. For example, presently there are three seminars on offer. If the visitor selects all three seminars, that would create three records in the ATTENDANCE table. So, it might look like this:
    attendance_id     account_id     seminar_id
         1                    1                    1
         2                    1                    2
         3                    1                    3
    My PHP skills are basic. I've done other forms and used PHP in other ways. But I have never had to populate several rows in one table from an array of checkboxes, nor have I been able to find an example of this.
    So the advice I am seeking (and perhaps this is premature) is this:
    Can one form field populate more than one record? (See the sketch below.)
    Should I set up the checkboxes as a group, or individually with different names?
    I am also considering setting up my tables differently so there is a table for each seminar; that may solve my problem.
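    In SQL terms, the "several records from one checkbox group" question comes down to one INSERT per checked seminar, with attendance_id supplied by a sequence or auto-increment column; for example, account 1 attending all three seminars:
    INSERT INTO attendance (account_id, seminar_id) VALUES (1, 1);
    INSERT INTO attendance (account_id, seminar_id) VALUES (1, 2);
    INSERT INTO attendance (account_id, seminar_id) VALUES (1, 3);
    The server-side script just loops over the submitted checkbox array and issues one such INSERT per value.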

  • Sequence contains more than one element error in MVC 5

    I created some models, added the migration and then did an update-database operation, but at my last update-database operation I got the error message saying:
        Sequence contains more than one element
    Below you can find my migration configuration:
    context.Categories.AddOrUpdate(p => p.CategoryName,
        new Category { CategoryName = "Sport" },
        new Category { CategoryName = "Music" });
    context.Subcategories.AddOrUpdate(p => p.SubcategoryName,
        new Subcategory { SubcategoryName = "Football" },
        new Subcategory { SubcategoryName = "Basketball" },
        new Subcategory { SubcategoryName = "Piano" },
        new Subcategory { SubcategoryName = "Violin" });
    context.Services.AddOrUpdate(p => p.ServiceType,
        new Service
        {
            ServiceType = "Football player",
            Category = { CategoryName = "Sport" },
            Subcategory = { SubcategoryName = "Football" }
        },
        new Service
        {
            ServiceType = "Piano lessons",
            Category = { CategoryName = "Music" },
            Subcategory = { SubcategoryName = "Piano" }
        });
    The problem occurs when I add new Services. I already have categories and subcategories, and if I write Category = new Category { CategoryName = "Music" } then it works, but I get the Music entry twice in my database (for this example). I want to
    use the already-added categories and subcategories. Below you can also find my model definitions.
    public class Category
    {
        [Key]
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
    }
    // Subcategory is defined the same way...
    public class Service
    {
        public int ServiceID { get; set; }
        public string ServiceType { get; set; }
        public virtual Category Category { get; set; }
        public virtual Subcategory Subcategory { get; set; }
    }

    After reading the article in the link you provided, I made the following changes in my models and created controllers for each of them using Entity Framework; then I created a migration and named it InitialServices. Afterwards, I added a few
    entries to my Configuration.cs file, and when I typed Update-Database I got an error in the Package Manager saying "RenameIndexOperation", marked in red. Below you can find my changed models and my Configuration.cs file, along with the automatically
    created migration file.
    Category.cs:
    public class Category
    {
        [Key]
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
        public virtual ICollection<Subcategory> Subcategories { get; set; }
    }
    Subcategory.cs:
    public class Subcategory
    {
        [Key]
        public int SubcategoryID { get; set; }
        public string SubcategoryName { get; set; }
        [ForeignKey("Category")]
        public int CategoryID { get; set; }
        public virtual Category Category { get; set; }
        public virtual ICollection<Service> Services { get; set; }
    }
    Service.cs:
    public class Service
    {
        [Key]
        public int ServiceID { get; set; }
        [Required]
        [Display(Name = "Service type")]
        public string ServiceType { get; set; }
        [ForeignKey("Subcategory")]
        public int SubcategoryID { get; set; }
        public int Count { get; set; }
        public virtual Subcategory Subcategory { get; set; }
    }
    _InitialServices.cs:
    public partial class InitialServices : DbMigration
    {
        public override void Up()
        {
            DropForeignKey("dbo.Services", "Category_CategoryID", "dbo.Categories");
            DropIndex("dbo.Services", new[] { "Category_CategoryID" });
            RenameColumn(table: "dbo.Services", name: "Subcategory_SubcategoryID", newName: "SubcategoryID");
            RenameIndex(table: "dbo.Services", name: "IX_Subcategory_SubcategoryID", newName: "IX_SubcategoryID");
            AddColumn("dbo.Subcategories", "CategoryID", c => c.Int(nullable: false));
            CreateIndex("dbo.Subcategories", "CategoryID");
            AddForeignKey("dbo.Subcategories", "CategoryID", "dbo.Categories", "CategoryID", cascadeDelete: true);
            DropColumn("dbo.Services", "Category_CategoryID");
        }
        public override void Down()
        {
            AddColumn("dbo.Services", "Category_CategoryID", c => c.Int(nullable: false));
            DropForeignKey("dbo.Subcategories", "CategoryID", "dbo.Categories");
            DropIndex("dbo.Subcategories", new[] { "CategoryID" });
            DropColumn("dbo.Subcategories", "CategoryID");
            RenameIndex(table: "dbo.Services", name: "IX_SubcategoryID", newName: "IX_Subcategory_SubcategoryID");
            RenameColumn(table: "dbo.Services", name: "SubcategoryID", newName: "Subcategory_SubcategoryID");
            CreateIndex("dbo.Services", "Category_CategoryID");
            AddForeignKey("dbo.Services", "Category_CategoryID", "dbo.Categories", "CategoryID", cascadeDelete: true);
        }
    }
    Configuration.cs:
    protected override void Seed(Workfly.Models.ApplicationDbContext context)
    {
        var categories = new List<Category>
        {
            new Category { CategoryName = "Sport" },
            new Category { CategoryName = "Music" }
        };
        categories.ForEach(c => context.Categories.AddOrUpdate(p => p.CategoryName, c));
        context.SaveChanges();
        var subcategories = new List<Subcategory>
        {
            new Subcategory { SubcategoryName = "Football", CategoryID = categories.Single(c => c.CategoryName == "Sport").CategoryID },
            new Subcategory { SubcategoryName = "Basketball", CategoryID = categories.Single(c => c.CategoryName == "Sport").CategoryID },
            new Subcategory { SubcategoryName = "Piano", CategoryID = categories.Single(c => c.CategoryName == "Music").CategoryID },
            new Subcategory { SubcategoryName = "Violin", CategoryID = categories.Single(c => c.CategoryName == "Music").CategoryID }
        };
        foreach (Subcategory s in subcategories)
        {
            var subcategoriesInDB = context.Subcategories.Where(c => c.Category.CategoryID == s.CategoryID).SingleOrDefault();
            if (subcategoriesInDB == null)
                context.Subcategories.Add(s);
        }
        context.SaveChanges();
        var services = new List<Service>
        {
            new Service { ServiceType = "Football coach", SubcategoryID = subcategories.Single(s => s.SubcategoryName == "Football").SubcategoryID },
            new Service { ServiceType = "Piano lessons", SubcategoryID = subcategories.Single(s => s.SubcategoryName == "Music").SubcategoryID }
        };
        foreach (Service s in services)
        {
            var servicesInDB = context.Services.Where(t => t.Subcategory.SubcategoryID == s.SubcategoryID).SingleOrDefault();
            if (servicesInDB == null)
                context.Services.Add(s);
        }
        context.SaveChanges();
    }

  • Error while running spatial queries on a table with more than one geometry.

    Hello,
    I'm using GeoServer with an Oracle Spatial database, and this is the second time I have run into problems because we use tables with more than one geometry.
    When GeoServer renders objects with more than one geometry on the map, it creates a query asking for objects where either of the two geometries interacts with the query window. This type of query always fails with an "End of TNS data channel" error.
    We are running Oracle Standard 11.1.0.7.0.
    Here is a small script to demonstrate the error. Could anyone confirm that they also have this type of error? Or suggest a fix?
    What this script does:
    1. Create table object1 with two geometry columns, geom1, geom2.
    2. Create metadata (projected coordinate system).
    3. Insert a row.
    4. Create spatial indexes on both columns.
    5. Run a SDO_RELATE query on one column. Everything is fine.
    6. Run a SDO_RELATE query on both columns. ERROR: "End of TNS data channel"
    7. Clean.
    CREATE TABLE object1 (
      id NUMBER PRIMARY KEY,
      geom1 SDO_GEOMETRY,
      geom2 SDO_GEOMETRY
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES (
      'OBJECT1',
      'GEOM1',
      2180,
      SDO_DIM_ARRAY(
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES (
      'OBJECT1',
      'GEOM2',
      2180,
      SDO_DIM_ARRAY(
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO object1 VALUES(1, SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(550000, 450000, NULL), NULL, NULL));
    CREATE INDEX object1_geom1_sidx ON object1(geom1) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX object1_geom2_sidx ON object1(geom2) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE' OR
    SDO_RELATE("GEOM2", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'OBJECT1';
    DROP INDEX object1_geom1_sidx;
    DROP INDEX object1_geom2_sidx;
    DROP TABLE object1;
    Thanks for help.

    This error appears in both GeoServer and SQL*Plus.
    I have set up a completely new database installation to test this error, and there everything works fine. I tried it again on the previous database, but I still get the same error. I also tried restarting the database, but with no luck; the error is still there. I guess something is wrong with the database installation.
    Does anyone know what could cause an error like this "End of TNS data channel"?

  • The "Measures" dimension contains more than one hierarchy... Collation issue

    It appears that an Excel query passed through to SSAS uses "measures" with a lowercase "m" where Analysis Services expects an uppercase "M", i.e. "Measures". Is there a fix in Excel to allow
    the correct passing of the "Measures" member name to the cube?
    BTW, I have NO Calculations in the cube.
    In Excel 2013, when I pivot with a PivotTable connected to a cube using case-sensitive collation (a non-default configuration)
    and filter by "Keep Only Selected Items", I get the error "The 'Measures' dimension contains more than one hierarchy, therefore the hierarchy must be explicitly specified".
    When I revert to the server-wide case-insensitive setting and perform the exact same pivoting, it works without error. The problem appears to be that Excel does not understand the server collation setting.
    When I ran SQL Server Profiler, I narrowed down the MDX statement run by Excel that gives me the error to this:
    with
    member measures.__XlItemPath as
    Generate(
    Ascendants([Employee].[Location Code].currentmember),
    [Employee].[Location Code].currentmember.unique_name,
    "|__XLPATHSEP__|"
    )
    member measures.__XlSiblingCount as
    Generate(
    Ascendants([Employee].[Location Code].currentmember),
    AddCalculatedMembers([Employee].[Location Code].currentmember.siblings).count,
    "|__XLPATHSEP__|"
    )
    member measures.__XlChildCount as
    AddCalculatedMembers([Employee].[Location Code].currentmember.children).count
    select { measures.__XlItemPath, measures.__XlSiblingCount, measures.__XlChildCount } on columns,
    [Employee].[Location Code].&[01W]
    dimension properties MEMBER_TYPE
    on rows
    from [Metrics]
    cell properties value
    Playing around with the query I discovered that if I capitalize the first letter of the "with measures" member, the statement works.
    with
    member Measures.__XlItemPath as
    Generate(
    Ascendants([Employee].[Location Code].currentmember),
    [Employee].[Location Code].currentmember.unique_name,
    "|__XLPATHSEP__|"
    )
    member Measures.__XlSiblingCount as
    Generate(
    Ascendants([Employee].[Location Code].currentmember),
    AddCalculatedMembers([Employee].[Location Code].currentmember.siblings).count,
    "|__XLPATHSEP__|"
    )
    member Measures.__XlChildCount as
    AddCalculatedMembers([Employee].[Location Code].currentmember.children).count
    select { measures.__XlItemPath, measures.__XlSiblingCount, measures.__XlChildCount } on columns,
    [Employee].[Location Code].&[01W]
    dimension properties MEMBER_TYPE
    on rows
    from [Metrics]
    cell properties value
    Also, I realise that I could change the collation on just the cube itself to case-insensitive to get this to work, but I really don't want to do an impact analysis of running a mixed-collation environment.
    So, my question is: is there an Excel fix that will allow me to run a case-sensitive cube and still let me filter by "Keep Only Selected Items" or "Hide Selected Items"? All other filtering works; it's only those two
    filtering options that error for me.
    Here are the versions I'm working with:
    Excel 2013 (15.0.4535.1507) MSO(15.0.4551.1007) 32-bit Part of Microsoft Office Professional Plus 2013
    Microsoft Analysis Server Enterprise 2012 11.0.3000.0
    Any help would be appreciated. Thank you in advance!

    Hi, I assume this logic is for a dimension formula?
    If you have multiple hierarchies like ParentH1 and ParentH2, you should use FORMULAH1 and FORMULAH2 and not the FORMULA column.
    In FORMULAH1:
    [Account.H1].[Account_A] / [Account.H1].[Account_B]
