Ecommerce site - how do I create several product pages using only one table of data?

Hi
I'm designing an ecommerce clothing site for my assignment using an Access database.
When I create the Data Set it takes all the information from my Access table "Products", creating one huge list of products on a single web page.
However, I would like to have the product data split across several web pages depending on their specific Category [i.e. T-shirts, Jumpers, Trousers, Bags, Accessories].
Does anybody know how to do this? I think it involves setting certain Parameters, or writing a SQL statement in the Dataset.
Thanks for any help
Amy

Hi Amy,
Here's a series of articles I did for the Adobe Developer Center; included is an example of a shopping cart app you can use as a starting point or an example if you like.
http://www.adobe.com/devnet/dreamweaver/articles/build_shopping_cart.html
http://www.adobe.com/devnet/dreamweaver/articles/build_shopping_cart_pt2.html
http://www.adobe.com/devnet/dreamweaver/articles/build_shopping_cart_pt3.html
Lawrence   *Adobe Community Expert*
www.Cartweaver.com
Complete Shopping Cart Application for
Dreamweaver, available in ASP, PHP and CF
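
For the category split Amy describes, a minimal sketch of the recordset SQL, assuming the Products table has a Category column as in the post (the other column names here are illustrative, and the exact parameter syntax depends on the server model Dreamweaver is using):

SELECT ProductID, ProductName, Price
FROM Products
WHERE Category = ?     -- bind one category per page, e.g. 'T-shirts'
ORDER BY ProductName;

Each category page then feeds its own value (typically from a URL parameter) into the same dataset, so a single Products table drives every page.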

Similar Messages

  • How do I confirm whether it is using only one partition?

    I want to select the data from one partition only.
    How do I confirm my select query is using only one partition?
    As a front-end developer, how do I suggest the correct partition?
    I know we can write the partition name in the select query if we know it,
    but as a front-end developer I don't know the correct partition name.

    Hello,
    You can find the partitions for a table in the view user_tab_partitions. You can use the PARTITION clause, but that defeats the purpose of having a partition key column.
    You should be able to do this:
    select * from my_part_table where part_key='key';
    Regards
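
    One way to confirm the statement is actually pruned to a single partition is to check its execution plan; a minimal sketch, assuming an Oracle database and the table name from the reply:
    EXPLAIN PLAN FOR
      SELECT * FROM my_part_table WHERE part_key = 'key';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- A "PARTITION RANGE SINGLE" step, or equal values in the Pstart/Pstop
    -- columns of the plan output, shows the optimizer read exactly one partition.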

  • How do you create a login page using Dashcode for the iPhone's mobile Safari that will transfer you to the next page?

    Hey, I have created a login page in Dashcode for a mobile Safari app (aka iPhone web app) and I am having trouble, since I can not find any useful info about multiple pages. I don't want to use a stack layout view because it is only a login page, and I need to check with a database to make sure the user's login info is correct. Right now I have it set up so that it loads another iPhone web app project once it validates the info.
    The only problem with this is that I am having a good bit of trouble trying to pass values from my PHP code to JavaScript or HTML. For some reason calling the JavaScript inside the PHP code makes the actual code inside the app not be called, and the same goes for the echo statement for the HTML.
    So I would like to be able to create the app in this way:
    Login page > PHP > MS SQL > PHP > UNKNOWN (if I can't get the javascript or html to output both) > Secure info on the next page
    I believe it would be a lot easier if I had the option for multiple pages, but instead I am having to load up an entirely new Dashcode project. If anyone knows a better way, please let me know. Or if anyone knows a link to good information on passing values from PHP to JavaScript, I would really appreciate it, because I couldn't get any of the code I tried to work.

    Addendum to previous reply:
    OK.  This is weird--but I should be used to that, and just grateful that it seems to work (for now).
    What I had done is FTPd some image files to my site using Filezilla, but when I had tried to access them, I was unsuccessful.  I am almost sure that I used the same url (and variations of it) as you suggested, namely,  http://mysite.verizon.net/username/filename , and it either did not work, or gave me the "Page under construction", or, in some cases asked me for my username and password.
    But, when I did it this time, it worked.  So I probably had something off, but I can now do what I want.
    By the way, if you'll permit another question, while on the site-builder site, it said that there was a "Web Photo Manager", and said that "To download the Web Photo Manager: Open the Site Builder application and go to the All My Sites page. Click on the Web Photo Manager link (listed under Advanced Building Tools )."  I can't find it--would you happen to know where it is?
    In any case, thanks a lot for all your help--it solved my problem. 

  • How do I create an exe file using Sun One?

    Hello everyone,
    I need help in creating an executable file in Sun One, can someone please help?

    Hi all, I searched the forum and came up with some suggestions, so I'll run with these until I exhaust all of them!!! Thanks anyway.

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
    Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
    CREATE TABLE ABSTRACTPRODUCT (
        ID VARCHAR(8) NOT NULL,
        DESCRIPTION VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    CREATE TABLE PRODUCT (
        ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
        CODE VARCHAR(10) NOT NULL,
        PRICE DECIMAL(12,2),
        PRIMARY KEY(ID)
    );
    CREATE UNIQUE INDEX iProduct ON PRODUCT(CODE);
    CREATE TABLE BOOK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        AUTHOR VARCHAR(60) NOT NULL,
        PRIMARY KEY (ID)
    );
    CREATE TABLE COMPACTDISK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        ARTIST VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    Is there a way to improve queries like:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
    Think of a table with many rows: is there something inside MaxDB to improve this type of query? Some annotations on the SQL? Or declaring tables that extend another by PK? On other databases I managed this using Materialized Views, but I think this could be faster using just the PK; am I wrong? Is the better option to consolidate all the tables into one table, and what is the impact on database size with this consolidation?
    note: with consolidation I will lose the NOT NULL constraints at the database side.
    thanks for any insight.
    Clóvis
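
    A sketch of the consolidation the post is weighing, assuming a type discriminator column is added (single-table inheritance; everything beyond the original column names is illustrative, and note that the NOT NULL constraints on AUTHOR and ARTIST are indeed lost):
    CREATE TABLE PRODUCT_FLAT (
        ID VARCHAR(8) NOT NULL,
        PRODUCT_TYPE CHAR(2) NOT NULL,   -- e.g. 'BK' = book, 'CD' = compact disk
        DESCRIPTION VARCHAR(60) NOT NULL,
        CODE VARCHAR(10) NOT NULL,
        PRICE DECIMAL(12,2),
        AUTHOR VARCHAR(60),              -- populated only for books
        ARTIST VARCHAR(60),              -- populated only for compact disks
        PRIMARY KEY(ID)
    );
    -- The first sample query then needs no outer joins at all:
    SELECT CODE, DESCRIPTION,
           DECODE(PRODUCT_TYPE, 'BK', AUTHOR, ARTIST) PERSON
    FROM PRODUCT_FLAT
    WHERE CODE LIKE '101%';
    Whether this is faster depends on the data: it trades the joins for wider rows and NULL columns, which is the size impact the post asks about.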

    Hi Lars,
    I don't understand why the optimizer picks that index for TM in the execution plan and doesn't use the join via the KEY column; note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", on the key column, yet the optimizer uses an INDEX whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. The index columns follow:
    indexes of TipoMovimento
    INDEXNAME     COLUMNNAME          SORT     COLUMNNO     DATATYPE     LEN     INDEX_USED     FILESTATE     DISABLED
    ITIPOMOVIMENTO     TIPO               ASC     1          VARCHAR          2     220546          OK          NO
    ITIPOMOVIMENTO     ID_SYS               ASC     2          CHAR          6     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_DEBITO          ASC     3          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_CREDITO     ASC     4          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO1     ID_SYS               ASC     1          CHAR          6     567358          OK          NO
    ITIPOMOVIMENTO2     DESCRICAO          ASC     1          VARCHAR          60     94692          OK          NO
    After I created the index iTituloCobrancaX7 on TituloCobranca(OID, DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
    OWNER     TABLENAME     COLUMN_OR_INDEX          STRATEGY                    PAGECOUNT     
         TC          ITITULOCOBRANCA1     RANGE CONDITION FOR INDEX          5368     
                   DATA_VENCIMENTO               (USED INDEX COLUMN)          
         MF          OID               JOIN VIA KEY COLUMN               9427     
         TM          OID               JOIN VIA KEY COLUMN               22     
                                  TABLE HASHED          
         PS          OID               JOIN VIA KEY COLUMN               1350     
         BOL          OID               JOIN VIA KEY COLUMN               497     
                                       NO TEMPORARY RESULTS CREATED          
         JDBC_CURSOR_19                    RESULT IS COPIED   , COSTVALUE IS     988
    Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected; even if I drop the new index iTituloCobrancaX7 the optimizer keeps this execution plan, and the query executes in 110 ms. With that great news I did the same thing in the production system, but the execution plan doesn't change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
    I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this in the thread "catalog cache hit rate, how to increase?") and the low selectivity of this SQL command. To achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts; since that type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
    Could the MaxDB developers implement this type of index, or is a feature like this not planned?
    If not, I must create another schema and consolidate tables to speed up queries on my system, but with this consolidation I will get more overhead. I must solve the low selectivity, because I think that as the data in the tables grows the query becomes impossible. I see that CREATE INDEX supports FUNCTION; maybe a FUNCTION that joins data from two tables could solve this?
    About the instance configuration, it is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
    About sending you the data from the tables: I don't have permission to do that, since all the data is in a production system and the customer doesn't give me the rights to send any information. Sorry about that.
    best regards
    Clóvis

  • How can I develop three JSPX pages having only one backing bean?

    Hello Team,
    I am working on JDeveloper 11g.
    I have one registration form in which the user enters self information on the first screen, insurance information on the second screen, and employment information on the third screen.
    So I implemented the following scenario:
    I created one JSPX page, and in this page I put 3 panelGroupLayouts and simply rendered each panelGroupLayout in turn; that was easy enough.
    But now I have to implement the af:train component, which means implementing the above scenario in three JSPX pages, and I want only one backing bean for these three JSPX pages:
    SelfInformation.jspx ...... SelfInformation.java
    InsuranceInformation.jspx
    EmploymentInformation.jspx
    I have done this too.
    But now I want that, when I add some components on the second screen (e.g. af:inputText), their backing bean entries are created in SelfInformation.java.
    Is this possible?

    Hi,
    I created three jspx pages and dragged and dropped the af:train and af:trainButtonBar components; by default Back and Next buttons are created. But my requirement is that on the first form only, I don't want to show the Back button.
    How can I do this?
    Edited by: Charu on Nov 22, 2009 9:42 PM

  • How to generate a delayed retriggerable pulse using only one counter with PXI 6070E card

    Hi
    I have a problem generating a retriggerable delayed pulse with a single counter (triggered through a signal at the gate) using a PXI 6070E card. The VI was developed in NI LabVIEW Traditional DAQ Ver. 7.1. I used the "delayed pulse generator config" VI and the "Start counter" and "Stop counter" VIs for this purpose, but no output was seen at the OUT terminal of the counter. So I introduced a "wait" VI and set it to 1 sec. Now the pulse output appears, but some pulses are missing momentarily after every 1 sec interval. (Any solution for this?)
    I have gone through a few similar requests in the forum, but they suggest either using two counters or generating a finite pulse train, which doesn't fit my application. Moreover, the PXI 6070E has only 2 counter/timers, and I am already using one counter to measure the frequency of a pulse train (signal 1). The application requires generating a delayed retriggerable pulse for every pulse in signal 1, so I have only one counter left.
    Can I measure the frequency of signal 1 by analog means, so that I can use two counters for pulse generation? (Signal 1 is a TTL signal.)
    Request some help.
    Thanks in Advance
    Regards

    A finite pulse train (N_Pulses >= 2) does require the use of 2 counters on most of our older hardware including your 6070E.  If you're just talking about generating a single retriggerable pulse, you would only need one counter.
    Here's an example in Traditional DAQ that shows you how to set a retriggerable pulse generation (it also allows you to adjust the characteristics of the pulse on-the-fly).
    If you're writing a new program, you might consider switching to DAQmx as it supports NI's latest hardware and recent OSes should you ever need to upgrade.  Traditional NI DAQ is no longer in active development.  Here's an example of how to implement a retriggerable pulse generation in DAQmx.  You should take note that you can't use the two drivers to simultaneously talk to the same piece of hardware, although you should be able to go back and forth by resetting the Traditional DAQ driver before switching to DAQmx.
    Best Regards,
    John Passiak

  • How do I create a web page using Dreamweaver

    I need help to create a website in Dreamweaver.  Where do I begin?

    Start with HTML & CSS code.  Those are the building blocks for everything you will be doing in DW.
    http://www.html.net/
    http://w3schools.com/
    http://www.csstutorial.net/
    http://phrogz.net/css/HowToDevelopWithCSS.html
    http://webdesign.tutsplus.com/sessions/web-design-theory/
    Once you've grasped the above fundamentals, work through this 5-part tutorial:
    Creating your first web site in DW CC-
    https://helpx.adobe.com/dreamweaver/learn/tutorials/how-to/first-website-part1.html
    Nancy O.

  • How can I create a level-based hierarchy using multiple logical tables?

    Hi All,
    In my use case I have two master logical tables:
    Division_Mst (divncd, DivnName)
    Department_Mst (deptcd, DeptName)
    plus one logical table which contains both key columns (divncd, deptcd),
    and a fact detail table where one measure column (empcd) uses the aggregation COUNT.
    So I want to create a level-based hierarchy like this:
    DivnName
              |------DeptName
    whose levels come from Division_Mst and Department_Mst,
    so the user can see the number of employees division-wise and drill down to department-wise.
    How can I achieve this?
    thanks
    Manish
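
    For reference, a sketch of the queries such a hierarchy answers at each drill level, using the tables named in the post; the bridging table Header_Tbl and the fact join key hdr_id are assumptions standing in for the actual model:
    -- Division level (top of the hierarchy):
    SELECT d.DivnName, COUNT(f.empcd) AS emp_count
    FROM Division_Mst d
    JOIN Header_Tbl h ON h.divncd = d.divncd
    JOIN Fact_Detail f ON f.hdr_id = h.hdr_id
    GROUP BY d.DivnName;
    -- Drill-down to departments within one division:
    SELECT dp.DeptName, COUNT(f.empcd) AS emp_count
    FROM Department_Mst dp
    JOIN Header_Tbl h ON h.deptcd = dp.deptcd
    JOIN Fact_Detail f ON f.hdr_id = h.hdr_id
    WHERE h.divncd = :divncd    -- the division the user drilled from
    GROUP BY dp.DeptName;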

    Check out Shay's blog https://blogs.oracle.com/shay/entry/adf_faces_dynamic_tags_-fora
    Timo

  • How can I manage several iTunes accounts on only one iPhone 4S?

    I don't want to illegally get applications from my friends' or colleagues' iTunes accounts.
    In fact, my company decided to give me an iPhone 4S for professional usage, and it is strongly recommended not to use my personal iTunes account.
    (I already had an iPhone 4 for personal usage.)
    So I created a new iTunes account, dedicated to professional usage. Now I need to download onto my professional iPhone the business applications that I bought with my personal iTunes account.
    I hope that this is a common problem with the iPhone in the enterprise and that somebody has found a solution...
    Thanks for your help. 

    This is a good opportunity to have a User Account for each of your family members.  And each User Account can login iTunes with their desired Apple ID.
    You can set this up from the Apple Logo > System Preferences > Users & Groups. 

  • How do I create envelopes in Pages using Numbers as the source?

    I've done this successfully before, back in 2011 when I was sending out holiday cards.  Now I go in there today looking to use the same envelope template and an updated (not everybody lives in the same location) Numbers spreadsheet, and I keep getting "Please select a numbers document that has one or more named header columns and one or more rows of data".  The Numbers file has 6 header columns (FN, LN, Address, City, State & Zip) and 77 rows of data.  I've tried to recreate the file under a different name, I've deleted and reinserted the header columns, etc., and nothing has worked.  I'm now at the point where I'm toying with the idea of pulling the plug on the holiday card altogether unless somebody can save the day.  PS... there is a holiday card in it for you
    Please help..

    Fair enough... it sounds like the holiday cards will be put on pause until 2014. 
    On a different note, I'm noticing all sorts of issues with the OS... are you finding that Mavericks is less stable than prior versions of the OS?  The entire reason I went away from Windows-based PCs was the instability of the OS, and now I'm finding that the stability of the Apple OS is starting to follow the same path. 
    Thoughts?

  • Can XSQL create multiple session variables using only one database call?

    Right now, if I want to set session variables for username and access level, I code it out like this:
    <xsql:set-session-param name="name" bind-params="username password">
    SELECT DISTINCT USERNAME
    FROM LKUP_USER
    WHERE USERNAME = ? AND PASSWORD = ? AND ACCESSLEVEL = 0
    </xsql:set-session-param>
    <xsql:set-session-param name="authlvl" bind-params="username password">
    SELECT DISTINCT ACCESSLEVEL
    FROM LKUP_USER
    WHERE USERNAME = ? AND PASSWORD = ? AND ACCESSLEVEL = 0
    </xsql:set-session-param>
    Is there any way to do it so that I don't have to do multiple queries to the database to set session variables? I.e., something like this:
    <xsql:set-multiple-session-param name="user authlvl" bind-params="username password">
    SELECT DISTINCT USERNAME,
    ACCESSLEVEL
    FROM LKUP_USER
    WHERE USERNAME = ? AND PASSWORD = ? AND ACCESSLEVEL = 0
    </xsql:set-multiple-session-param>
    Sort of like how bind-params works. Setting bind-params="username password" makes the first ? akin to username and the next ? akin to password.
    Is this functionality already in existence?
    Thanks!
    Malik Graves-Pryor

    Not currently possible to collapse into one request without doing it in a custom action handler.
    A custom action handler can:
    1. Get the current JDBC connection from the XSQLPageRequest
    2. Get the SQL statement to perform using the function getActionElementContent
    3. Handle any bind parameters specified in a bind-params attribute on the action element by calling handleBindVariables()
    4. Execute and fetch the row from the query
    5. Check that the return value of getPageRequest().getRequestType() equals the value "Servlet"
    6. Cast the page request to an XSQLServletPageRequest and call getHttpServletRequest()
    7. Call getSession() on the request
    8. Set the session variables you want
    9. Close the JDBC statement
    I will consider a built-in enhancement for a future XSQL release.

  • Create user with select privileges on only one schema

    Can someone tell me how I can create a user with select privileges on only one schema?
    I don't want the user to have select privileges on any other schema.
    Can someone advise me?
    Thanks

    In general, you would do something like:
    CREATE ROLE abc_read_only;

    BEGIN
      FOR x IN (SELECT * FROM dba_tables WHERE owner = 'ABC')
      LOOP
        EXECUTE IMMEDIATE 'GRANT SELECT ON abc.' || x.table_name || ' TO abc_read_only';
      END LOOP;
    END;
    /

    CREATE USER your_user ...;
    GRANT abc_read_only TO your_user;
    You create a role, grant the role SELECT access to all the tables in the ABC schema (you can extend this to grant access to views, functions, etc. depending on the requirements), and then grant that role to your user.
    Justin
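
    To verify the grants afterwards, one check against the data dictionary (role name from the reply above; note that tables created in ABC later will need a fresh GRANT, since the loop runs only once):
    SELECT owner, table_name, privilege
    FROM role_tab_privs
    WHERE role = 'ABC_READ_ONLY';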

  • How can I create several rectangles with one Draw Rect.vi?

    How can I create several rectangles with one Draw Rect.vi? Thanks

    You can call it in a for loop, with an array of the rectangle coordinates you want to draw. Is this what you mean?
    CLA, LabVIEW Versions 2010-2013
    Attachments:
    rectangle.png (11 KB)

  • How do I create a product or item database so I can search for the product and its files with the product's name or four-digit code

    How do I create a product or item database so I can search for the product and its files with the product's name or four-digit code?

    Ok, so I made some progress on this. I have figured out that I can add a chained "add to cart" to certain items; then, when they click the Buy Now button, it will add both items to the cart. However, this would require me to manually build each product page and generate a custom button for each one with both product IDs in it.
    Can anyone offer help on how to put in some JS that would append a second function to the onclick function that BC generates dynamically?
    For the products that require a set-up fee, I assume I would add, to either the product templates or the item description, some JS that finds the onclick of the Buy Now button and appends a second function to also add the set-up fee product to the cart. The end result would be code that looks like this:
    <input type="submit" class="productSubmitInput" onclick="AddToCart(188536,6314368,'',4,'','',true);AddToCart(188536,6314367,'',4,'','',true);return false;" value="Buy Now" name="AddToCart_Submit" />
    Except, of course, the product ID in the first AddToCart would be the main product, with the second one being the appended one.
    Does any of that make sense? lol
