Negating deny-attacker inline best practice

We have recently deployed an inline IPS solution using 5.1(7) E1 software. We would like to apply deny-attacker-victim-pair-inline for some signatures when the traffic involves one particular subnet on the network, but negate that action for the rest.
In order to implement this correctly, I think we need to use SigEvent Action Filters on the sensor and use <<actions-to-remove/deny-attacker-victim-pair-inline>> for all subnets except the one that we wish to allow deny actions for.
I have seen that in the sensor configuration, under the <<service network-access>> section, you can implement a <<never-block-networks>> statement. My understanding is that this is used for shunning/blocking rather than for deny-inline actions.
Am I correct about this?
Could someone on the list please confirm that this is the best-practice solution for negating deny-attacker actions inline?

Create two event action filters.
The first event action filter should match the signatures and the subnet you want to deny on, and should not subtract any actions. Make sure you set it to "stop on match".
The second one should match the same signatures but the 0.0.0.0-255.255.255.255 address range, and remove the appropriate actions.
The net result is that the first event action filter applies when it matches and the second applies when it doesn't.
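A rough CLI sketch of that approach (a minimal sketch only: it assumes the default rules0 instance, and the filter names, the 3000-3999 signature range and the 10.10.10.0/24 subnet are placeholders; verify the exact parameter names against the 5.1(7) CLI guide):

configure terminal
service event-action-rules rules0
! Filter 1: the subnet that should keep its deny actions - subtract nothing, stop on match
filters insert Keep-Deny-Subnet begin
signature-id-range 3000-3999
attacker-address-range 10.10.10.0-10.10.10.255
stop-on-match true
exit
! Filter 2: everything else - strip the deny action for the same signatures
filters insert Strip-Deny-Others end
signature-id-range 3000-3999
attacker-address-range 0.0.0.0-255.255.255.255
actions-to-remove deny-attacker-victim-pair-inline
exit
exit

Then exit the service and apply the changes. Because the first filter stops processing on a match, events sourced from that subnet never reach the second filter, so only events from all other addresses have deny-attacker-victim-pair-inline removed. (Use victim-address-range instead if the subnet of interest is on the victim side.)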

Similar Messages

  • Inline Figure Numbering: Best Practice

    Hi all,
    I'm seeking input into what the best practice is for numbering images/figures that are placed inline in a document. I want these figures to have consecutive numbers and to move with the text as the text is edited.
    The crucial part is that I want to do this in a way that I can put a new figure into the document, and all the following figures will be renumbered automatically.
    I can do this if I use a different "Figures" master page that is applied to pages that contain ONLY figures and no text, but I can't seem to figure out the best way to do it inline throughout the document.
    If I anchor objects inline, in the order that I want them to appear, the numbering works just fine, but as soon as I insert and anchor a new figure in the middle with automatic figure numbering, the newest anchored image becomes "Figure 1" and renumbers the entirety of the document.
    Thanks!

    As long as I have all my figures I want to use, and I insert and anchor those figures in the text in the order that I want them to be in, it works. I am using a paragraph style for figure numbers.
    The issue is that as soon as an update to the manual occurs (a common occurrence in this manual) and I need to add a new figure (with a new figure number) halfway through the document, it numbers the new figure as Figure 1, and completely screws up the number formatting the rest of the way through.
    I'm trying to set it up right the first time, so it works correctly for years to come.

  • Best practice to update inline/publish folio?

    Hi there
    Think all is in my question
    I have an online application with an online folio and I need to update the same folio with a new version.
    What is the best practice to organize my work ?
    Do I have to continue working in InDesign with the same ID but not update/republish it in Folio Producer (this option scares me totally... what if my draft goes online???)
    Or do I have to recreate another folio and, after testing it, publish it with the same folio name and description? (not sure it will update the same file, as it is not the same)
    What is the best practice to organise my work and my files?
    Thank U

  • Best practices for using Normalizer in ASA and in AIP-SSM

    Both PIX OS 7.x and IPS 5.x software have a concept of "traffic normalization". PIX OS on ASA can do virtual reassembly, IPS on SSM (so far as I know) can do physical reassembly and fragmentation of IP packets. Also, both ASA and SSM can do TCP normalization. For example, they both can "check inconsistent retransmissions" and protect against "TTL evasion attacks". I realize that PIX OS has only basic normalization functions and the SSM is much more configurable.
    The question is: what are the best practices here? Is it better to disable some IP/TCP PIX OS checks / IPS signatures on the ASA and/or SSM? Is it better to use just the SSM for traffic normalization? Does anybody have personal experience here?
    Also, there is a BugID CSCsd04327 - "ASA all out of order packets are dropped when sending to ssm"
    "When ips ssm is inline slowness is reported. show service-policy shows that the number of out of order packets reported match exactly the number of no buffer drops (even with queue-limit option). Performance hit is not the result of tcp normalization (on IPS 5.x ssm) in this case, but rather an issue with asa normalizer."
    To me it seems to be more logical to have normalization function on the firewall, but there may be drawbacks in doing this.
    So, those who're using ASA with SSM, please share your experience.
    Thx.

    Yes, this is almost correct ;)
    TCP SRP (Stream Reassembly Processor) is turned OFF on the SSM and cannot be enabled, unlike on the 4200 appliances, but IP FRP (Fragmentation Reassembly Processor) is functioning on the SSM.
    The testing of 7.2(1) shows the following:
    When you configure a "policy-map" to send packets to the SSM, the "tcp-map" parameter "queue-limit", which has a value of zero by default, is set to some value X (the X is unknown). This means that the ASA now only accepts TCP segments which are sent in the correct order. More specifically, gaps in SEQs are not allowed anymore. When, for example, the ASA receives a TCP segment which has a SEQ within the window but the previous TCP segment has been lost, it sends an ACK to the sender to force retransmission of the lost segment. As a result the sender retransmits both segments. Only after that does the ASA forward both segments to the SSM. This basically means that the SSM always sees in-order TCP segments. That is why SRP is not needed on the SSM.
    There are at least two problems however.
    The first problem is the performance impact.
    The ASA now acts almost like a proxy. And, so far as I know, it doesn't support SACK (Selective ACKs). First, when the ASA does TCP SEQ randomization it doesn't change the SEQ values within the SACK TCP option. This simply breaks SACK. Second, even if you turn the randomization mechanism OFF, then, I believe, the ASA will not selectively ACK the lost TCP segments, as it simply doesn't support this mechanism.
    The second problem is THE SECURITY HOLE.
    By default the ASA doesn't check TCP checksums. The 4200 appliances do check them by default. But as we now know, the SRP is turned OFF on the SSM... So, this means that the SSM module can easily be evaded. The hacker only needs to mix attacking traffic with random TCP segments that have bad TCP checksums. The SSM module will see the mixture of the two and will not recognize the attack. The target host will drop the TCP segments with the bad checksums and see only the attacking traffic... This has been successfully verified in the lab.
    Of course, this security hole can be closed with the "tcp-map" parameter "checksum-verification", but it will definitely have a performance impact.
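    As a rough sketch of how that might be enabled (assuming 7.x syntax, and assuming IPS-TRAFFIC is the class that already redirects traffic to the SSM; the map and class names are placeholders):
    tcp-map IPS-NORMALIZER
      checksum-verification
    policy-map global_policy
      class IPS-TRAFFIC
       ips inline fail-open
       set connection advanced-options IPS-NORMALIZER
    As noted above, expect a performance cost from verifying the checksum of every segment.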
    The last note: All of the above has never been documented by Cisco. So, use at your own risk, etc.
    I hope, you will read this message, Marcoa. All of this MUST be documented. Once again, the default behaviour of the ASA opens up a big security hole.
    Regards,
    Oleg Tipisov,
    REDCENTER,
    Moscow

  • Mac Pro 10.7 Server DMZ best practice

    The Mac Pro has 2 gigE, what is the best practice for lion server and DMZ?
    Should I ignore one port and put the server in the DMZ, firewalling from the LAN to the server (a pain for file sharing), or use one port for the DMZ and one for the LAN?
    I have been trying to use the two ports and LION SERVER seems to want to bind only to one address (10.1.1.1 DMZ or 192.168.1.1 LAN).
    Does anyone have a best practice for this? I am using a Cisco ASA 5500 for the firewall.
    Thank you

    If you put your server in a DMZ, all traffic will be sent to it unfiltered, in which case the server's firewall would be your only line of defense against attack.
    For better security, set firewall rules in the Cisco that pass traffic to the ports you want open and deny traffic on all other ports.  You can also restrict access to specific ports by allowing or denying specific IP addresses or address blocks in the firewall settings.
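    For example (illustrative only - the ACL name, the 10.1.1.1 DMZ address mentioned above, and the ports are placeholders for whatever services you actually publish):
    access-list OUTSIDE_IN extended permit tcp any host 10.1.1.1 eq www
    access-list OUTSIDE_IN extended permit tcp any host 10.1.1.1 eq https
    access-list OUTSIDE_IN extended deny ip any any
    access-group OUTSIDE_IN in interface outside
    File sharing (e.g. AFP on TCP 548) could then be permitted only from your LAN addresses rather than from the Internet.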

  • Data Federator XI 3.1 - best practices

    We are planning of rolling out a data federator setup in our company and I'm looking for some best practices.
    The major question I have is, do we install the data federator server components on a dedicated server or can/should we install the components on one of the machines of our BOE r3 cluster (4 nodes -> 2 mgmt and 2 processing)
    Is there any document that contains a summary of the best practices for setting up a data federator environment.
    Kind regards
    Guy

    Hello,
    the advice is to have a specific machine for the DF server.
    DF can become memory and CPU intensive for large queries, so a dedicated machine allows you to improve DF performance and avoid negative impacts on other services (e.g. BOE).
    A lot of calculation and temporary storage is done in memory, so the advice is to add as much RAM as needed for large queries. If the RAM is not large enough you will have disk swapping and hence lower performance.
    Hope that it helps
    Regards
    PPaolo

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is in the small and medium business space.  Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. In order for Cisco to be considered as a purchase option, it will be critical that the UC-5x0 integrates well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
    Some of the challenges include:
    1. Where should the UC-540 be placed: As the gateway for QOS or behind a validated UC-5x0 router/security appliance combination
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme (see the sketch after this list)?
    3. Which device should handle DNS?
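    If the relay route is considered, a minimal sketch of what it typically looks like on the IOS side is below (the Vlan1 data interface and the 192.168.10.10 SBS address are assumptions for illustration; the voice VLAN would normally stay on the UC540 so the phones keep receiving their TFTP option):
    interface Vlan1
     description Data VLAN - relay client DHCP requests to the SBS
     ip helper-address 192.168.10.10
    The local DHCP pool for that data subnet would also need to be removed so the UC540 and the SBS do not both answer requests.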
    My documentation (and I recommend that any Cisco lab/best practice guidance include it as well) will assume the following real-world scenario, the same that applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High Speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional Line of Business Application or Terminal Server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack, which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts are appreciated.

    Progress Report;
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following;
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, backup your current configuration via the CCA,
    Next, telnet into the UC540, log in, enter configuration mode, and cut and paste the following to 1:1 NAT the two additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy what you have existing, as I have done below (in bold), and paste it into a notepad.
    Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server (mine in bold), as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory
    wr mem
    One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you are getting Error 721 and/or Error 800, you will need to use the following commands to add a rule to ACL 104:
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way of allowing VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info when and if I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.
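    For reference, a narrower alternative to the any/any entry above might look like the following, assuming the remote VPN server's public address is known (203.0.113.5 is purely illustrative):
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre host 203.0.113.5 any
    That way GRE is only passed when it is sourced from that one VPN endpoint instead of from anywhere.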

  • Best practice for GSS design

    Please advise as to what records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the Domain List of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P and GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the internet-facing firewall and before the ACE module, or use it in one-arm mode. Trying to figure out the best fit in the design.
    FWSM1 >>> GSS >>> ACE
    or
    just put the GSS in one-arm mode hanging off the FWSM1 >>> ACE path:
    FWSM1 >>> ACE
                |
               GSS
    Thanks in advance,
    Nav

  • Best-practice for use of object styles to manage image text wrap issues when aiming at both print and EPUB output?

    I have a work-flow question about object styles, text-wrap, and preparing a long document with lots of images for dual print/EPUB output in InDesign CC 2014.
    I am sort of experienced with InDesign but new to EPUB export. I have hundreds of pages and hundreds of images so I'd like to make my EPUB learning curve, in particular, less painful.
    Let me talk you through what I'm planning and you tell me if it's stupid.
    It's kind of a storybook-look I'm going for. Single column of text (6" by 9" page) with lots of small-to-medium images on the page (one or two images per page), and the text flowing around, sometimes right, sometimes left. Sometimes around the bounding box, sometimes following the edges of the images. So in each case I'm looking to tweak image size and placement and wrap settings so that the image is as close to the relevant text as possible and the layout isn't all wonky. Lovely print page the goal. Lots of fussy trade-offs and deciding what looks best. Inevitably, this will entail local overrides of paragraph styles. So what I want to do, I guess, is get the images as closely placed as possible, before I do any of that overriding. Then I divide my production line.
    1) I set aside the uniformly-styled doc for later EPUB export. (This is wise, right? Start for EPUB export with a doc with pristine styles?)
    2) With the EPUB-bound version set aside, I finish preparing the print side, making all my little tweaks. So many pages, so many images. So many little nudges. If I go back and nudge something at the beginning everything shifts a little. It's broken up into lots of separate stories, but still ... there is no way to make this non-tedious. But what is best practice? I'm basically just doing it by hand, eyeballing it and dropping an inline anchor to some close bit of text in case of some storm, i.e. if there's a major text change my image will still be almost where it belongs. Try to get the early bits right so that I don't have to go back and change them and then mess up stuff later. Object styles don't really help me with that. Do they? I haven't found a good use for them at this stage (Obviously if I had to draw a pink line around each image, or whatever, I'd use object styles for that.)
    Now let me shift back to EPUB. Clearly I need object styles to prepare for export. I'm planning to make a left float style and a right float style and a couple of others for other cases. And I'm basically going to go through the whole doc selecting each image and styling it in whatever way seems likeliest. At this point I will change the inline anchors to above line or custom, since I'm told EPUB doesn't like the inline ones.
    I guess maybe it comes down to this. I realize I have to use object styles for images for EPUB, but for print, manual placement - to make it look just right - and an inline anchor seems best? I sort of feel like if I'm going to bother to use object styles for EPUB I should also use them for print, but maybe that's just not necessary? It feels inefficient to make so many inline anchors and then trade them for a custom thing just for EPUB. But two different outputs means two different workflows. Sometimes you just have to do it twice.
    Does this make sense? What am I missing, before I waste dozens of hours doing it wrong?

    I've moved your question to the InDesign EPUB forum for best results.

  • Best practice for Video over IP using ISDN WAN

    I am looking for the best practice to ensure that the WAN has sufficient active ISDN channels to support the video conference connection.
    Reliance on a load threshold either:
    - takes too long for the ISDN calls to establish, causing problems for video setup,
    - or is too quick to place additional ISDN calls when only data is using the line.
    What I need is for the ISDN calls to be pre-established just prior to the video call. I have done this in the past with the "ppp multilink links minimum" command, but this manual intervention isn't the preferred option in this case.
    thanks

    This method is as secure as the password: an attacker can see the hashed value, and you must assume that they know what has been hashed, and with what algorithm. Therefore, the challenge in attacking this system is simply to hash lots of passwords until you get one that gives the same value. Rainbow tables may make this easier than you assume.
    Why not use SSL to send the login request? That encrypts the entire conversation, making snooping pointless.
    You should still MD5 the password so you don't have to store it unencrypted on the server, but that's a side issue.

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
    I have been setting up a RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 SH servers, 2 WA servers, 2 CB servers and 1 LS.
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
    The question is; I've read a lot on technet and different sites about how to set the thing up, but no one mentions any best practices for where to place them.
    Should i:
    - set up WAP in my DMZ with ADFS in LAN, then place the RDGW in the LAN and reverse proxy in
    - place RDGW in the DMZ, opening all those required ports into the LAN
    - place the RDGW in the LAN, then port forward port 443 into it from internet
    Any help is greatly appreciated.

    Hi,
    The deployment depends entirely on your company's requirements, as there are many things to take care of, such as hardware, network, security and other related matters. Personally, to set up the RD Gateway server I would not recommend the first option. As per my research, for the best result you can use option 2 (place the RDG server in the DMZ and then allow the required ports), because that way the outside network can't directly connect to your internal servers and it is harder for an attacker to break into the network.
    A perimeter network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway, RD Web Access and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to the article below for more information.
    RD Gateway deployment in a perimeter network & Firewall rules
    http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • BEST PRACTICE FOR THE REPLACEMENT OF REPORTS CLUSTER

    Hi,
    I've read the note reports_gueide_to_changed_functionality on OTN.
    On page 5 it is stated that the reports cluster is deprecated.
    Snippet:
    Oracle Application Server High Availability provides the industry’s most
    reliable, resilient, and fault-tolerant application server platform. Oracle
    Reports’ integration with OracleAS High Availability makes sure that your
    enterprise-reporting environment is extremely reliable and fault-tolerant.
    Since using OracleAS High Availability provides a centralized clustering
    mechanism and several cutting-edge features, Oracle Reports clustering is now
    deprecated.
    Can anyone please tell me what the best practice is to replace the reports cluster?
    It's really annoying that the clustering technology is changing in every version of reports!!!
    martin

    hello,
    in reality, reports server "clusters" were more of a load-balancing solution than clustering (no shared queue or cache). since it is desirable to have one load-balancing/HA approach for the application server, reports server clustering is deprecated in 10gR2.
    we understand that this frequent change can cause some level of frustration, but it is our strong belief that unifying the HA "attack plan" for all of the app server components will ultimately benefit customers by simplifying their topologies.
    the current best practice is to deploy LBRs (load-balancing routers) with sticky-routing capabilities to distribute requests across middle-tier nodes in an app-server cluster.
    several customers in high-end environments have already used this kind of configuration to ensure optimal HA for their systems.
    thanks,
    philipp

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
    The table monitoring attribute is not necessary anymore, or better said it is deprecated, because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is only relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
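    A rough way to eyeball that comparison yourself is the query below (a sketch only - it approximates the brconnect check; 'SAPR3' is taken from the example above and 50% is just the stats_change_threshold default):
    -- changes recorded since the last statistics gathering vs. the row count from the last stats
    SELECT m.table_name,
           m.inserts + m.updates + m.deletes AS changes,
           t.num_rows,
           ROUND(100 * (m.inserts + m.updates + m.deletes) / NULLIF(t.num_rows, 0), 1) AS pct_changed
      FROM dba_tab_modifications m
      JOIN dba_tables t
        ON t.owner = m.table_owner AND t.table_name = m.table_name
     WHERE m.table_owner = 'SAPR3'
     ORDER BY pct_changed DESC NULLS LAST;
    You may need to run DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO first so that the in-memory counters are visible in DBA_TAB_MODIFICATIONS.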
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
    STALE_PERCENT - Determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers.The default value is 10%. Note that if you set stale_percent to zero the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
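    For reference only, the Oracle-side mechanism described above looks like this (not something you would normally run on an SAP system, as the next paragraph explains; 'SAPR3' and the 5 percent value are purely illustrative):
    -- gather statistics only for objects Oracle itself flags as stale
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPR3', options => 'GATHER STALE');
    -- lower the staleness threshold for one table from the 10% default
    EXEC DBMS_STATS.SET_TABLE_PREFS('SAPR3', 'SOME_TABLE', 'STALE_PERCENT', '5');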
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

  • Best practice for adding text to Flex container?

    Hi,
    I'm having some trouble laying a TextFlow out properly inside a Flex container. What's the best practice for achieving this, for example adding a lot of text to a small Panel?
    Is it possible to pass anything other than a static width and
    height to DisplayObjectContainerController constructor, or is this
    not the place to implement this? I guess what I am looking for is
    the layout logic I'd normally pack into a custom Flex component and
    implement inside measure() and so on.
    My use case: a chat application which adds multiple TextFlow
    elements to a Flex container such as Panel. Or use TextFlow as a
    substitute for UITextField.
    Some example code would help me greatly.
    I'm using Flex 3.2.
    Regards,
    Stefan

    Thanks Brian, the example helps. However problems quickly
    arise if I modify it slightly to this (please compile it to see):
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                    layout="absolute" initialize="init()">
        <mx:Script>
            <![CDATA[
                import flash.display.Sprite;
                import flashx.textLayout.compose.StandardFlowComposer;
                import flashx.textLayout.container.DisplayObjectContainerController;
                import flashx.textLayout.container.IContainerController;
                import flashx.textLayout.elements.TextFlow;
                import flashx.textLayout.conversion.TextFilter;

                private var _container:Sprite;
                private var _textFlow:TextFlow;

                private function init():void
                {
                    // raw Sprite added outside the Flex layout; the TLF controller draws into it
                    _container = new Sprite();
                    textArea.rawChildren.addChild(_container);

                    var markup:String = "<TextFlow xmlns='http://ns.adobe.com/textLayout/2008'>" +
                        "<p><span>Hello World! Hello World! Hello World! Hello World! " +
                        "Hello World! Hello World! Hello World! Hello World! Hello World! " +
                        "Hello World! Hello World! Hello World! </span></p></TextFlow>";

                    _textFlow = TextFilter.importToFlow(markup, TextFilter.TEXT_LAYOUT_FORMAT);
                    // 200x50 composition area inside a 100x100 Canvas
                    _textFlow.flowComposer.addController(
                        new DisplayObjectContainerController(_container, 200, 50));
                    _textFlow.flowComposer.updateAllContainers();
                }
            ]]>
        </mx:Script>
        <mx:Canvas width="100" height="100" id="textArea" x="44"
                   y="46" backgroundColor="#F5EAEA"/>
    </mx:Application>
    What is the best way to make my textflow behave like a
    'normal' UIComponent in Flex? Should I use UIComponent instead of
    Sprite as a Container? Will that take care of resize behaviour?
    I have never before needed to use rawChildren.addChild for
    example, maybe you can explain why that's needed here?
    I realise that the new Textframework works on an AS basis and
    is not Flex or Flash specific, but this also poses some challenges
    for those of us using the Flex framework primarily.
    I think it would help to have some more basic examples such
    as using the new text features in a 'traditional' context. Say for
    example a TextArea that is just that, a TextArea but with the
    addition of inline images. I personally feel that the provided
    examples largely try to teach me to run before I can walk.
    Many thanks,
    Stefan

  • Best Practice for serving static files (gif, css, js) from front web server

    I am working on optimizing portal performance by moving static files (gif, css, js) to my front web server (Apache) for a WLP 10 portal application. I ended up moving the whole "framework" folder of the portal WebContent to a file system served by the Apache web server (the one which hosts the WLS plugin pointing to my WLP cluster). I use the following Alias/LocationMatch directives for that:
    Alias /portalapp/framework "/somewhere/servedbyapache/docs/framework"
    <Directory "/somewhere/servedbyapache/docs/framework">
    <FilesMatch "\.(jsp|jspx|layout|shell|theme|xml)$">
    Order allow,deny
    Deny from all
    </FilesMatch>
    </Directory>
    <LocationMatch "/portalapp(?!/framework)">
         SetHandler weblogic-handler
         WLCookieName MYPORTAL
    </LocationMatch>
    So now the browser gets all static files from Apache instead of the app server. However, there are several files from the bighorn L&F which are located in the WLP shared lib: skins/bighorn/ window.css, wsrp.css, menu.css, general.css, colors.css; skins/bighorn/borderless/window.css; skeletons/bighorn/js/ util.js, buttons.js; skeleton/bighorn/css/layout.css
    I have to merge these files into the project and physically move them into the Apache-served file system to make the above Apache configuration work.
    However, this approach exposes a bunch of framework resources which I do not intend to change and which should not be changed (custom.css is the only place to make custom changes to the bighorn skin), which is obviously not a very elegant solution. The other approach would be to create a more elaborate expression for the LocationMatch (I am not sure that's entirely possible given the location of these shared resources). A more radical move would be to stop using bighorn and create a totally custom L&F (skin, skeleton), which is quite a lot of work (plus, bighorn is working just fine for us).
    I am wondering what the "Best Practice Approach" recommended by Oracle/BEA is, given that I want to serve all static files from my front-end Apache server instead of the WLS app server.
    Thanks,
    Oleg.

    Oleg,
    you might want to have a look at the official WLP performance support pattern (Metalink DocID 761001.1), which contains a section about "Configuring a Fronting Web Server Serving WebLogic Portal 8.1 Static Artifacts".
    It was written for WLP 8.1, but most of the settings/recommendations should also apply to WLP 10.
    --Stefan
