Custom Design rules on table partitions

Hi
I need to create several custom design rules at the table partition level.
For example, one of the rules is that, for all table partitions:
  if a table partition name begins with M
    then it should not be compressed
    and it should not be in a tablespace called xyz
How do I go about enforcing this rule using design rules?

Hi,
here is a simple example, which you can easily improve. In fact you have two rules there, and it would be better to create them as two separate rules.
var ruleMessage;
var errType;
var table;
//define the function
//define the function
function checkPartitions(){
  ruleMessage = "";
  var result = true;
  var model = table.getDesignPart();
  var tp = model.getStorageDesign().getStorageObject(table.getObjectID());
  if(tp != null){
    var partitions = tp.getPartitions().toArray();
    for(var i = 0; i < partitions.length; i++){
      var partition = partitions[i];
      // rule 1: partitions whose name starts with "M" must not be compressed
      if(partition.getName().startsWith("M") && "YES".equals(partition.getDataSegmentCompression())){
        result = false;
        ruleMessage = "Partition " + partition.getName() + " for table " + tp.getLongName() + " cannot be compressed";
        break;
      }
      // rule 2: partitions whose name starts with "M" must not be in tablespace xyz
      var tablespace = partition.getTableSpace();
      if(tablespace != null && "xyz".equals(tablespace.getName()) && partition.getName().startsWith("M")){
        result = false;
        ruleMessage = "Partition " + partition.getName() + " for table " + tp.getLongName() + " cannot be in tablespace xyz";
        break;
      }
    }
  }
  return result;
}
//call the function
checkPartitions();
You should define it for the "Table" object, and your physical model should be open.
Philip
Edited by: Philip Stoyanov on Jan 10, 2012 4:53 AM

Similar Messages

  • Custom Design Rules Error: importClass is not defined

    Hi everybody
    I have a problem with the Data Modeler 4.1 (Beta) custom design rules.
    In version 4.0.3 I have some custom design rules that use importClass(javax.swing.JOptionPane);
    for error exceptions and message boxes.
    - in version 4.0.3 this works just fine
    - in version 4.1 it raises an error
    Has this option changed in version 4.1, or where is the error?
    Thanx for any idea, suggestion...
    Martin

    I have found the solution!
    In version 4.0.3 the Data Modeler used Mozilla Rhino as its JavaScript engine, while version 4.1 uses Oracle Nashorn.
    So in version 4.1 you must load the compatibility script to be able to use the importClass functionality.
    Example:
    // Load compatibility script
    load("nashorn:mozilla_compat.js");
    // Import the Java class
    importClass(javax.swing.JOptionPane);

  • Design capture of hash partitioned tables

    Hi,
    Designer version 9.0.2.94.11
    I am trying to capture a server model where the tables are hash partitioned, but this fails with an error because Designer only knows about range partitions. Does anyone know how I can get Designer to capture these tables and their constraints?
    Thanks
    Pete

    Pete,
    I have tried all three "current" Designer clients 6i, 9i, and 10g, at the "current" revision of the repository (I can post details if anyone is interested). I have trawled the net for instances of this too; there are many.
    As stated by Sue, the Designer product model does not support this functionality (details can be found on ORACLE Metalink under [Bug No. 1484454] if you have access; if not, see the excerpt below). It appears that at the moment ORACLE has no urgent plans to change this (the excerpt is dated as raised in 2001 and last updated in May 2004).
    Composite partitioning and List partitioning are equally affected.
    >>>>> ORACLE excerpt details STARTS >>>>>
    CDS-18014 Error: Table Partition 'P1' has a null String parameter
    'valueLessThan' in file ..\cddo\cddotp.cpp function
    cddotp_table_partition::cddotp_table_partition and line 122
    *** 03/02/01 01:16 am ***
    *** 06/19/01 03:49 am *** (CHG: Pri->2)
    *** 06/19/01 03:49 am ***
    Publishing bug, and upping priority - user is stuck hitting this issue.
    *** 09/27/01 04:23 pm *** (CHG: FixBy->9.0.2?)
    *** 10/03/01 08:30 am *** (CHG: FixBy->9.1)
    *** 10/03/01 08:30 am ***
    This should be considered seriously when looking at ERs we should be able to
    do this
    *** 05/01/02 04:37 pm ***
    *** 05/02/02 11:44 am ***
    I have reproduced this problem in 6.5.82.2.
    *** 05/02/02 11:45 am *** ESCALATION -> WAITING
    *** 05/20/02 07:38 am ***
    *** 05/20/02 07:38 am *** ESCALATED
    *** 05/28/02 11:24 pm *** (CHG: FixBy->9.0.3)
    *** 05/30/02 06:23 am ***
    Hash partitioning is not modelled in repository and to do so would require a
    major model change. This is not feasible at the moment but I am leaving this
    open as an enhancement request because it is a much requested facility.
    Although we can't implement this I think we should try to detect 'partition by
    hash', output a warning message that it is not supported and then ignore it.
    At least then capture can continue. If this is possible, it should be tested
    and the status re-set to '15'
    *** 05/30/02 06:23 am *** (CHG: FixBy->9.1)
    *** 06/06/02 02:16 am *** (CHG: Sta->15)
    *** 06/06/02 02:16 am RESPONSE ***
    It was not possible to ignore the HASH and continue processing without a
    considerable amount of work so we have not made any changes. The existing
    ERROR message highlights that the problem is with the partition. To enable
    the capture to continue the HASH clause must be removed from the file.
    *** 06/10/02 08:32 am *** ESCALATION -> CLOSED
    *** 06/10/02 09:34 am RESPONSE ***
    *** 06/12/02 06:17 pm RESPONSE ***
    *** 08/14/02 06:07 am *** (CHG: FixBy->10)
    *** 01/16/03 10:05 am *** (CHG: Asg->NEW OWNER)
    *** 02/13/03 06:02 am RESPONSE ***
    *** 05/04/04 05:58 am RESPONSE ***
    *** 05/04/04 07:15 am *** (CHG: Sta->97)
    *** 05/04/04 07:15 am RESPONSE ***
    <<<<< ORACLE excerpt details ENDS <<<<<
    I (like I'm sure many of us) have an urgent, immediate need for this sort of functionality, and have therefore resolved to look at some form of post-process to produce the required output.
    I imagine that it will be necessary to flag the Designer meta-data content and then manipulate the generator output once it has done its "raw" generation as RANGE partitions (probably by using the VALUE_LESS_THAN field, as it's mandatory, and meaningless for HASH partitions!).
    An alternative would be to write an API level generator for this using the same flag, probably using PL/SQL.
    If you have (or anyone else has) any ideas on this, then I'd be happy to share them to see what we can cobble together in the absence of an ORACLE interface to their own product.
    Peter

  • Creating table partitions via Common Format Designer

    I am looking for a way to create table partitions via the Common Format Designer in my Models.
    As far as I see this is not something that ODI can handle with the out of the box install.
    Is this something that can be added as part of an action or similar?
    thanks
    uli

    Hi Uli,
    Partitions are not yet defined in the ODI metadata; you could add a step to the DDL procedure generated by CFD that handles the creation of the partitions (see the sketch below).
    Thanks,
    Julien
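    For illustration only, here is a hedged sketch of the kind of statement such an extra step in the CFD-generated DDL procedure could issue. The table, partition, and tablespace names are invented for the example and are not part of any CFD output.

    -- hypothetical extra DDL step: add a range partition to an already
    -- created target table (all names below are invented)
    ALTER TABLE sales_fact
      ADD PARTITION sales_2014_q1
        VALUES LESS THAN (TO_DATE('01-04-2014', 'DD-MM-YYYY'))
        TABLESPACE tbs_sales_2014_q1;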

  • Introduction of Oracle Table Partitions into PeopleSoft HRMS environment

    I would like to pose a general question and see if anyone has found any published advice or suggestions from PeopleSoft or Oracle on this. I believe that Oracle table partitioning isn't supported through the PeopleTools Application Designer functionality, most likely for platform independence. However, we were thinking about implementing table partitioning for performance and for the ability to refresh test instances with subsets of data instead of the entire database.
    I know that this would be a substantial effort, but I was wondering if anyone had any documentation on this type of implementation. I've read some articles from David Kurtz on the subject, and it sounds like these were all custom jobs for each individual client. I was looking for something more generic on this practice from PeopleSoft or Oracle.
    Regards,
    Jay

    Thanks for the article Nicolas. I will add that to my collection, good reference piece.
    I think you grasped the gist of the query: I know that putting partitioning into a PeopleSoft application is going to be highly specific to the client and the application you are running, but what I was looking for was something like a baseline guide for implementing partitioning in a PeopleSoft application as a whole.
    In other words, something like a note that the Application Designer panels would be affected, since they don't have the ability to manage partitions; any changes to tables that use partitioning would therefore need to be maintained at the database level and could no longer use the DDL generated from PeopleTools Application Designer. Other considerations would be a list of tables that are candidates for partitioning based on the application (in my case HRMS), and maybe suggestions on what column should be used for partitioning, etc. All of which are touched on in the article you identified about putting partitioning in at the database level for a generic application.
    Thanks for your help, it is much appreciated...
    Jay

  • Vertical table partition?

    Hi,
    our database contains XML documents in 11 languages, one table for each language. In addition, there is meta information about each document which is the same for every language, so this meta information has been placed in a separate table.
    Now to the problem: common database queries search in the documents (interMedia) as well as
    in the metadata, which means that there is a join between the table that contains the documents (columns: key, documenttext) and the table that contains the meta information (columns: key and 18 others). Unfortunately, after having located documents via the interMedia index, Oracle reads the corresponding data blocks to extract the key. Because the documents are up to 4K, Oracle reads a lot of information that it does not need. Is it possible to have a vertical partition of the documents table in order to efficiently read only the key information instead of the whole table rows?
    Thank You for any hint,
    Markus

    Thanks for your comment, and sorry for the delay; my Internet went down yesterday. Well, as you said, throw away the existing table structure. I agree with you, but there is a limitation right now: our application only understands the current structure, and any change to the database would require changing the execution engine. Our application provides an interface for ad hoc reporting. We would certainly use dimensional modeling, but unfortunately our CTO believes the existing structure is much better than a dimensional model, because it gives the flexibility to load any data (retail, click stream, etc.) and provide reporting on it. Although the architecture is bad, we are providing analytics to customers.
    To cut a long story short, they have assigned me the task of improving its performance. I have already improved performance by using different Oracle features like table partitioning, sub-partitioning, indexing and more, but we want more. I have observed that during the data loading process the update part takes most of the time; if I could do this another way it would reduce the overall data loading time. We extract the file from the client server, perform transformations according to business rules, and put the result of the transformation in a flat file. Finally our application uses SQL*Loader to load it into a flat table and then divides it into different tables. Some of our existing clients have 250 columns, some 350 plus.
    I think I have explained a lot, but if you have more questions I am ready to answer them. Please suggest, from your experience, how I could improve the performance of the existing system.

  • Create index partition in the table partition tablespace

    Hello,
    I am running a custom job that
    * Creates a tablespace daily
    * Creates the daily table partition in the tablespace created
    * Drops the table partition that is X days old
    * Drops the tablespace for that partition on day X+1.
    The above job runs perfectly, but I'm having issues with managing the indexes for these partitioned tables. In the old database (10g, single node), all the partitions/indexes existed in one BIG tablespace, and when I imported the table creation script into the new database, I modified all the table partitions and indexes to go into their respective tablespaces.
    Eg:
    Table_name........Partition_name.....................Index_Part_name..........................Tablespace_name
    ============...================............====================...........=================
    TABL1...................TABL1_2012_07_16............TABL1_IDX_2012_07_16............TBS_2012_07_16
    TABL1...................TABL1_2012_07_15............TABL1_IDX_2012_07_15............TBS_2012_07_15
    But now when the job runs, it creates the index into the default tablespace TBS_DATA.
    Table_name........Partition_name.....................Index_Part_name..........................Tablespace_name
    ============...================.............====================...........=================
    TABL1...................TABL1_2012_08_16............TABL1_IDX_2012_08_16............TBS_DATA
    TABL1...................TABL1_2012_08_15............TABL1_IDX_2012_08_15............TBS_DATA
    I can issue alter index rebuild to move the indexes to its default tablespace, but how can I ensure that the index gets created in its designated tablespace?
    NOTE - the partition/tablespace management job that I run only creates the table partition and not the index.
    The new env is a 2-Node 11gR2 RAC cluster on Linux x86_64.
    Thanks in advance,
    aBBy.

    Excerpt from the job -
    This creates the partition into the new tablespace.
    v_sql_new_part := 'alter table '||tab_owner||'.'||tab_name||' add partition '||v_new_part_nm||'
    values less than (to_date('''||v_new_part_dt_formatted||''',''DD-MON-YYYY'')) tablespace '||part_tbs;
    execute immediate v_sql_new_part;
    New tablespace for new partition - because this is a 10T database and having multiple tablespaces helps with backup/recovery.
    Thanks,
    aBBy.
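    One possible approach (a hedged sketch, not a tested fix): if the index is a LOCAL partitioned index, adding the table partition also adds a matching index partition automatically, and the job could immediately rebuild that new index partition into the daily tablespace. The variables idx_name and v_new_idx_part_nm below are hypothetical; substitute however your job derives the index name and index partition name (by default a new local index partition is named after the table partition).

    -- hedged sketch: relocate the freshly added local index partition
    -- into the same daily tablespace as the table partition
    v_sql_new_idx := 'alter index '||tab_owner||'.'||idx_name||
                     ' rebuild partition '||v_new_idx_part_nm||
                     ' tablespace '||part_tbs;
    execute immediate v_sql_new_idx;

    Alternatively, if new index partitions should always land in one fixed tablespace, ALTER INDEX ... MODIFY DEFAULT ATTRIBUTES TABLESPACE may be worth a look; since your tablespace changes daily, the per-partition rebuild above keeps the index in step with the table partition.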

  • Azure SQL Federations Retired - Will Table Partitions replace?

    My latest project was designed around Azure SQL Federations. Needless to say, the frustration over this feature being retired is high; I can't imagine how the people that have already coded their projects around it feel.
    With Azure SQL Federations gone, is table partitioning going to be in the road map soon?

    Hi Mendokusai,
    The features of SQL Server that are typically used for partitioning data in on-premises solutions include: Replication, Table Partitioning, Partitioned Views, Distributed Partitioned Views, File group Strategies, and Cross-database queries (Distributed Queries).
    Azure SQL Database does not support any of these features except Partitioned Views. Azure SQL Database does offer a scale-out solution, which horizontally partitions data across multiple databases.
    Azure SQL Database supports two methods to implement partitions, one is custom sharding, another is federations.
    The current implementation of Federations will be retired with Web and Business service tiers.  Consider deploying custom sharding solutions to maximize scalability, flexibility, and performance.
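    As a rough illustration of a partitioned view (a hedged sketch with invented table and column names; each member table would also need a CHECK constraint on OrderYear so the optimizer can prune):

    -- hypothetical local partitioned view over per-year member tables
    CREATE VIEW dbo.Orders_All
    AS
    SELECT OrderID, OrderYear, CustomerID FROM dbo.Orders_2013 WHERE OrderYear = 2013
    UNION ALL
    SELECT OrderID, OrderYear, CustomerID FROM dbo.Orders_2014 WHERE OrderYear = 2014;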
    For more information, you can review the following articles.
    http://msdn.microsoft.com/library/azure/dn495641.aspx
    http://msdn.microsoft.com/en-us/library/jj156171.aspx
    Regards,
    Sofiya Li
    Sofiya Li
    TechNet Community Support

  • How to disable a custom designed Tx code for multiple user at a time

    Hi,
    I have designed a screen in a module pool for the end user to make entries; when he saves, the data is saved in a standard table and a Z table. The main field on the screen is the batch number. From the batch number a bag number is generated, and the consumed quantity is saved against that bag number. The bag number is built from the first 5 digits of the batch number followed by a running bag number for that batch. For example, if the batch number is 12345 and packing has already been done 5 times for the same batch, the last bag number in the Z table will be 123450005, so the next time the user packs with the same batch the new bag number will be 123450006. The problem is that when one user makes entries in the transaction and at the same time another user opens the same transaction to pack the same batch, both of them get the same bag number before saving.
    I have called the enqueue and dequeue function modules, but users can still pack the same batch at the same time. My requirement is to prevent two users from working on the same batch while packing in that transaction code.
    I have written the following code for the enqueue and dequeue technique:
    data: B_matnr type mara-matnr,
           B_charg type mchb-charg.
    data : i_temp type TABLE OF zpackhdr WITH HEADER LINE,
           i_temp1 type TABLE OF zpackhdr WITH HEADER LINE.
    move : 1110 to WA_BCH-werks,
           chk_matnr1 to WA_BCH-matnr,
           v_bcharg to WA_BCH-charg,
           vgrade to WA_BCH-grade,
           new_batch to WA_BCH-bagno,
           m_baleno to WA_BCH-baleno,
           b_date to WA_BCH-indat.
    APPEND wa_bch to i_bch.
    clear b_date.
    READ TABLE i_bch INTO wa_bch INDEX 1.
        B_MATNR = WA_BCH-matnr.
        B_CHARG = WA_BCH-bagno.
    concatenate  B_matnr B_charg  into
        WA_BCH-objek respecting blanks .
       modify I_BCH from WA_BCH index sy-tabix.
    CLEAR: B_MATNR,
               B_CHARG.
    call function 'ENQUEUE_EMMCH1E'
      EXPORTING
        MODE_MCH1      = 'E'
        MANDT          = SY-MANDT
        MATNR          = WA_BCH-MATNR
        CHARG          = WA_BCH-BAGNO
      EXCEPTIONS
        FOREIGN_LOCK   = 1
        SYSTEM_FAILURE = 2
        OTHERS         = 3.
    if sy-subrc <> 0.
      " another user already holds the lock for this batch/bag number
    endif.
    call function 'DEQUEUE_EMMCH1E'
      EXPORTING
        MODE_MCH1 = 'E'
        MANDT     = SY-MANDT
        MATNR     = WA_BCH-MATNR
        CHARG     = WA_BCH-CHARG.

    I do understand what you say. Mine is a custom-designed screen. When I open it I have around 15 input fields, of which batch is obligatory. When I enter a batch and hit Enter, all the other fields relevant to that batch (material, order, etc.) are filled automatically from the tables, and the bag number field is generated from the first 5 digits of the batch followed by 0001 if it is the first entry for that batch. So when a user opens that screen in 2 different windows and enters details without saving either screen, the bag number is generated as 001 in both, and on saving, 2 entries are saved with the same bag number. I have therefore created a lock entry on the AFPO table using the order field. Now, when a user opens 2 screens with the same batch and enters data without saving, he gets the same bag number 001 in both. When he saves the first screen and comes to the second screen to save, he gets the message 'ORDER CURRENTLY BEING PROCESSED', but after the data is saved on the first screen, the second screen can still be saved with the same bag number 001. My issue is this: when he saves the first screen and then tries to save the second, the user should get that error message and be forced out of the screen, so that he can make a fresh entry for that batch and the bag number is generated as 002.
    Regards,
    venkat.

  • CC&B 2.3.1 - Custom indexes for base tables

    Hi,
    We are seeing a couple of statements in the database whose performance could be improved with new custom indexes on base tables. The questions are:
    - can we create new indexes on base tables ?
    - is there any recommendations about naming, characteristics and location for this indexes ?
    - is there any additional step to do in CC&B in order to use the index (define metadata or ...) ?
    Thanks.
    Regards.

    Hi,
    if necessary, you can create a custom index.
    In that situation you should follow the naming convention from the Database Design Standards:
    Indexes
    Index names are composed of the following parts:
    [X][C/M/T]NNN[P/S]
    •  X – the letter X is used as the leading character of all base index names prior to Version 2.0.0. Now the first character of the product owner flag value should be used instead of the letter X. For a client-specific implementation index in Oracle, use CM.
    •  C/M/T – the second character can be C, M, or T. C is used for control tables (admin tables), M for master tables, and T for transaction tables.
    •  NNN – a three-digit number that uniquely identifies the table on which the index is defined.
    •  P/S/C – P indicates that the index is the primary key index, S is used for indexes other than primary keys, and C indicates a client-specific implementation index in a DB2 implementation.
    Some examples are:
    •  XC001P0
    •  XT206S1
    •  XT206C2
    •  CM206S2
    Warning! Do not use index names in the application, as the names can change due to unforeseeable reasons.
    There is no additional metadata information for indexes in the CI_MD* tables, because a change to indexes does not influence the generated Java code.
    Hope that helps.
    Regards,
    Bartlomiej
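    For instance, a client-specific index following that convention might look like this (a hedged sketch only; the table, columns, and the table number 206 are purely illustrative):

    -- CM marks a client-specific (custom) index, 206 stands in for the
    -- table's number, S for a non-primary-key index
    CREATE INDEX CM206S2 ON CI_BILL (BILL_DT, ACCT_ID);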

  • Table Partitioning in Oracle 9i

    Hi all,
    I have a question on partitioning in Oracle 9i.
    I have a parent table with primary key A1 and attribute A2. A2 is not a primary key, but I would like to partition the table based on this attribute. I have a child table with attribute B1 being a foreign key to A1.
    I wish to perform data purging on the parent and child tables. I'll purge the parent table based on A2, but for the child table it will be inefficient if I delete all records in the child table where parent.A1 = child.B1. Should I add a new attribute A2 to the child table and partition the child table based on this attribute, or is there a better way to do it?
    Thanks in advance for all replies.
    Cheers,
    Bernard

    Bernard
    Right 100K in the parent...but how many in the child ?
    I guess it comes back to what I said earlier...you can either take the hit on the cascaded delete to get out the records on the child table or you can denormalise the column down onto the child table in order to partition by it.
    I'm building a Data Warehouse currently and we're using the denormalise approach on a couple of tables in order to allow them to be equipartitioned and enable easier partition management and DML operations as you've indicated....but our tables have 100's of millions of rows in them so we really need to do that for manageability.
    100K records in the parent is probably not too onerous, provided the ratio to the child is not such that on average each deleted parent has hundreds of children, especially for a monthly batch process - the question there would be how much time you have to do this at the end of the month. I'd suggest you set up a quick test and benchmark it with, say, 10K records as a representative sample (you can do all 100K if you have time/space), then assess that load/time against your month-end window. If it's reasonably quick then there is no need to compromise your design.
    You should also consider whether the 100K is going to remain consistent over time or whether it is going to grow rapidly, in which case that would sway you towards adding the denormalisation for partitioning at the outset.
    HTH
    Jeff
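    To illustrate the denormalisation approach Jeff describes (a hedged sketch only; table, column, and partition names are invented, and the ranges are arbitrary):

    -- the child carries a copy of the parent's A2 so both tables can be
    -- purged by the same date ranges
    CREATE TABLE child (
      b1       NUMBER        NOT NULL REFERENCES parent (a1),
      a2       DATE          NOT NULL,
      payload  VARCHAR2(100)
    )
    PARTITION BY RANGE (a2) (
      PARTITION p_2004_q1 VALUES LESS THAN (TO_DATE('01-04-2004','DD-MM-YYYY')),
      PARTITION p_2004_q2 VALUES LESS THAN (TO_DATE('01-07-2004','DD-MM-YYYY'))
    );

    -- purge: drop the old child partition first, then delete the matching
    -- parent rows, so the foreign key is never violated
    ALTER TABLE child DROP PARTITION p_2004_q1;
    DELETE FROM parent WHERE a2 < TO_DATE('01-04-2004','DD-MM-YYYY');

    Whether that is worth it still comes down to the volume test described above.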

  • Regarding adding a Custom field to Standard Table

    Hi ABAPers,
    Can any one explain the below spec-description.
    "The purpose of this design is to provide the foundation for a more automated solution to the invoice reconciliation process.  This design calls for adding a custom field to the standard SAP table EINE as well as a data maintenance tool for the same.  There will also be a new custom table for storing values associated with the new EINE field.  These new tables will also provide users with the ability to determine which PIR are soon to expire."
    We have to add one custom field to the standard table EINE. How can we add this custom field to the standard table?
    As far as I know we can add it through an append structure. Is that correct or not?
    And what is a data maintenance tool?
    Please explain in detail.
    Thanks in advance.
    Regards,
    Ramana Prasad. T

    Hi,
    Go to SE11 and enter your table name, then press the Display button. In the application toolbar choose Append Structure, create a Z structure, add your custom field, and then activate the table.
    Regards,
    nagaraj

  • Foreign keys at the table partition level

    Anyone know how to create and / or disable a foreign key at the table partition level? I am using Oracle 11.1.0.7.0. Any help is greatly appreciated.

    Hmmm. I was under the impression that Oracle usually ignores indices on columns with mostly unique and semi-unique values and prefers to do full-table scans instead on the (questionable) theory that it takes almost as much time to find one or more semi-unique entries in an index with a billion unique values as it does to just scan through three billion fields. Though I tend to classify that design choice in the same category as Microsoft's design decision to start swapping ram out to virtual memory on a PC with a gig of ram and 400 megs of unused physical ram on the twisted theory that it's better to make the user wait while it needlessly thrashes the swapfile NOW than to risk being unable to do it later (apparently, a decision that has its roots in the 4-meg win3.1 era and somehow survived all the way to XP).

  • How to design static information table

    Hi gurus,
    I was trying to find a solution by googling etc. but was not successful.
    I am developing a mail letter report which requires a pretty, Word-style table holding static information. I can't find any way to use HTML code. My current implementation is copying and pasting the MS Word table into an OLE object. It looks OK in design view, but if I print the report the result is not as good as with MS Word.
    Is there any way I can design a static content table (this is not a database table)?
    My CR version is 11.
    My Report: 1st page: customer details. some information such as names will be fetched from DB.
                     2nd page: static information (MS word style pretty table) for customers.
                     Both pages will be printed in one piece of paper (duplex).
    Thanks
    Oli
    Edited by: CRGuru on Jun 17, 2009 3:41 AM
    Edited by: CRGuru on Jun 17, 2009 3:43 AM
    Edited by: CRGuru on Jun 17, 2009 3:43 AM

    Please re-post if this is still an issue or purchase a case and have a dedicated support engineer work with you directly

  • LSMW-Allocation Rule and Table creation.

    Hi Team,
    I need to create an Allocation Rule and an Allocation Table using LSMW. I tried it using the recording method.
    The issue I am facing is that I need to define a header and an item structure. Up to the creation of the header data everything works fine.
    For the Allocation Rule, we need to upload multiple customers or sites with a quota as item data against one base site group, in the format below.
          SBELN  EKORG  EKGRP  SVBEZ     ARTKL        FILKLP           VQUOT 
    H     0001   1000        A07       Test      246402648   DUMMYGRP
               FILNR   VQUOT 
    D        500055   0
    D        500103   0
    D        500201   206
    For the Allocation Table, we need to upload multiple customers/sites against a single article, and the same for the next articles, in the format below.
      AUFAR  EKORG  EKGRP   BEZCH VKSTP WEFDT     VKSTP  WVDAT       MATNR PMNGE FILKL ASTRA
    H ZNLV     1000     A09          Test     D          03012014  D         21022014 0
            MATNR PMNGE FILKL ASTRA
    D      1234     40         21
    D      1234    40         23
    D      12345  10        10
    Could you please let me know what changes I need to make in the LSMW code so that it reads all the item data and updates it in one pass?
    Thank you in advance.
    Regards,
    Rahul

    I went to SM30, but I don't know what to do there to make the field appear in SE16. In SM30, when I clicked on 'No restrictions' and then Display, the missing field was not visible; but when I click on 'Restrictions' and then Display, the field is there.
