Best practice for calling only records belonging to a category

This is one of those questions that should be simple and straightforward; I am just not sure how to implement it.
Here is what is working currently:
I have an app that adds, updates, deletes, and plays video clips from a MySQL database, specifically using the "videoclips" table. Each video has a unique ID (videoID). Each video belongs to one of two categories (videocategory): "Youtube Video" or "Yahoo Video". I also have a table named "video_category" with a primary key of "videocategoryID". I use the "video_category" table for a cfselect form input to populate the "videocategory" field of the "videoclips" table.
This all works nicely, but here is one of my hangups and the first part of my question: I am not sure whether I should create and populate a videocategoryID field in the "videoclips" table, and I don't know the best way to set it up and match it to the videocategoryID in the "video_category" table. My database engine is currently MyISAM. Should I be using InnoDB and foreign keys to match up the tables, and will this solve the problem of populating the videocategoryID in two separate tables? Yes, I know this seems more SQL-related, but please read on.
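For the schema question, here is a minimal sketch of what the two tables could look like on InnoDB with a real foreign key. The column names follow the post; the exact data types are my assumptions:

CREATE TABLE video_category (
    videocategoryID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    videocategoryname VARCHAR(50) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE videoclips (
    videoID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    videocategoryID INT NOT NULL,
    videosubject VARCHAR(100),
    videotitle VARCHAR(100),
    videocomments TEXT,
    videoembed TEXT,
    FOREIGN KEY (videocategoryID) REFERENCES video_category (videocategoryID)
) ENGINE=InnoDB;

Note that the foreign key only enforces that every videocategoryID in videoclips exists in video_category; it does not populate the column for you, so the backfill asked about at the end of this question is still a separate one-time step.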
There are two different directions I am considering and need help implementing:
     1. I would like to know how to select and pass multiple videos belonging to a specific category and display them on a page named for that category, e.g. "youtubvideo.cfm" or "yahoovideo.cfm".
or this option:
     2. I would like to add a form field on the same page as the videos table where I can select the category and post back to the same page with the results, i.e. only that category's videos. (Either way, the underlying query would look like the sketch below.)
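Both directions end up running essentially the same query. A rough sketch, assuming the videocategoryID column discussed above (aliasing the name back to videocategory keeps the existing #qVideoclips.videocategory# output working); passing the category as a string instead just means matching on videocategoryname:

SELECT v.videoID, c.videocategoryname AS videocategory,
       v.videosubject, v.videotitle, v.videocomments
FROM videoclips v
INNER JOIN video_category c ON c.videocategoryID = v.videocategoryID
WHERE c.videocategoryID = ?        -- or: WHERE c.videocategoryname = ?
ORDER BY v.videotitle

In a cfquery, the ? value would come from the URL variable or the posted form field, passed through cfqueryparam.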
I am generating bean, DAO, and Gateway CFCs from the wizard. Using the gateway method, I am able to display all of the videos and select a specific video to play. Here are snippets of the page:
videopage.cfm
<!---player defaults to first clip in the database--->
<cfparam name="url.videoid" default="1">
<!---gets all videoclips--->
<cfscript>
    qVideoclips = CreateObject("component", "myvideoapp.Components.VideoclipsGateway").getAllAsQuery();
</cfscript>
<!---gets all videocategories--->
<cfscript>
    qVideoCategory=CreateObject("component", "myvideoapp.Components.video_categoryGateway").getAllAsQuery();
</cfscript>
<!---gets the specific video selected from clicking the Details link--->
<cfset videoComp = CreateObject("component", "myvideoapp.Components.VideoclipsGateway")>
<cfif isDefined("url.videoID")>
    <cfset video = videoComp.getById(url.videoID)>
<cfelse>
    No data matches this query.
</cfif>
<body>
<!---table that displays the videoclips--->
<!---When details link is selected, the page refreshes with the selected videoID ready to play--->
<cfoutput query="qVideoclips">
              <tr>
                <td>#qVideoclips.currentrow#</td>
                <td>#qVideoclips.videocategory#</td>
                <td>#qVideoclips.videosubject#</td>
                <td>#qVideoclips.videotitle#</td>
                <td>#qVideoclips.videocomments#</td>
                <td><a href="videopage.cfm?videoID=#qVideoclips.videoID#">Details</a></td>
              </tr>
            </cfoutput>
<!---below is the category selection form - I would like this to post back to this page with the above table showing videos from the selected category--->
<fieldset id="cflayoutleft">
        <legend id="cflayoutleftLegend">Categories</legend>
        <cfform>
        <p>
          <label class="top" for="videocategory">Video Category</label>
          <cfselect name="videocategory" id="videocategory" query="qVideoCategory" value="videocategoryname" display="videocategoryname" selected="#video.getvideocategory()#"></cfselect>
        </p>
        <p>
          <cfinput type="hidden" name="videocategoryname" value="#video.getvideocategory()#" validateat="onSubmit">
        </p>
        <div class="submit">
        <p>
          <cfinput type="submit" name="Submit" class="submit" id="Submit" value="Submit">
        </p>
        </cfform>
        </fieldset>
<!---video player--->
<cfoutput>#video.getvideoembed()#</cfoutput>
Keep in mind, I don't have a category ID yet.
So in summary:
I want to display only the videos from a selected category.
I need to know whether I can pass a string, which would mean I could use one table and pass the category name; I just don't know how.
If I can only pass "videocategoryID", I would need to create and populate a field in the "videoclips" table called videocategoryID.
I already have a table called "video_category". How can I incorporate it to get the field populated in the "videoclips" table?
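Since the existing videoclips.videocategory column already stores the category name text, the new ID column could be backfilled by joining on that name. A sketch in MySQL's multi-table UPDATE syntax, assuming the name strings in the two tables match exactly:

ALTER TABLE videoclips ADD COLUMN videocategoryID INT;

UPDATE videoclips v
INNER JOIN video_category c ON c.videocategoryname = v.videocategory
SET v.videocategoryID = c.videocategoryID;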
I hope this is clear. Help would be greatly appreciated!
Thanks,
Marty P
MP e-commerce

There are a couple of options here if you need to get the proxy disabled:
1) pinhole with an ACL that allows DHCP to pass to your internal servers
2) run DHCP on a switch, router, or firewall in the DMZ
3) if you are using a cable modem or DSL for the guest users, you can let that do the DHCP
In general I've seen most of these in play, but I like option 2 myself.
Sent from Cisco Technical Support iPad App

Similar Messages

  • _msdcs subdomain best practice with NS records?

    I have the _msdcs subfolder under my domain (the grey folder); example below.
    It has only one DC inside of it as an NS server. This DC is old and no longer exists. I checked my test environment and it has the same scenario (an old DC that no longer exists); example below.
    I'm just wondering:
    1) Is this normal? Should this folder update itself with other servers?
    2) Should I add one of my other DCs and remove the original?
    I have a single-forest, single-domain setup at the 2008 functional level. My normal _msdcs zone does behave as expected and removes and adds the appropriate records. Thanks.

    I apologize for the late response. I see you've gone further than what I recommended.
    No, you shouldn't have deleted the _msdcs.parent.local zone! I'm not sure why you did that. Are you working with someone else on this who recommended that? If not, you're over-thinking it. I provided specifics to fix it by simply updating the NS records, that's it. If you only found that the _msdcs folder had the wrong record, then that's all you had to change.
    In cases where DCs are removed, replaced, upgraded, etc., it's also best practice to check a few things to make sure everything is in order, and one of them is checking the NS records on all zones and delegations. A delegation's NS records won't update automatically with changes, but zone NS records will if DCs are properly demoted.
    The _msdcs delegated zone is required by Active Directory. And yes, per your thread subject, it's best practice. When Windows 2000 came out, and IF you had created the initial domain with it, it was not set up this way, but all domains initially created with Windows 2003 and newer are designed this way. If you upgraded from 2000 to 2003, then one of the steps that must be performed is creating the _msdcs delegation.
    Please re-create it in this order:
    1. In the DNS console, right-click Forward Lookup Zones, and then click New Zone. Click Next.
    2. On the Zone Type page of the New Zone Wizard, click Primary zone, and select the "Store the zone in Active Directory" check box. Click Next.
    3. On the Active Directory Zone Replication Scope page, click "To all DNS servers in the Active Directory forest parent.local".
    4. On the Zone Name page, in the Zone Name box, type _msdcs.parent.local.
    5. Complete the wizard by accepting all the default options.
    After you've done that:
    1. Delete the _msdcs subfolder under parent.local.
    2. Right-click parent.local and choose New Delegation.
    3. Type in _msdcs.
    4. On the name servers page, type in the name of your server and its IP address.
    5. Complete the wizard. You should now see a grayed-out _msdcs folder under parent.local.
    6. Go to the c:\windows\system32\config\ folder.
    7. Find netlogon.dns and rename it to netlogon.dns.old.
    8. Find netlogon.dnb and rename it to netlogon.dnb.old.
    9. Open a command prompt and run:
       ipconfig /registerdns
       net stop netlogon
       net start netlogon
    10. Wait a few minutes, then click the _msdcs.parent.local zone and press F5 to refresh it. You should see the data populate.
    Ace Fekay
    MVP, MCT, MCITP/EA, MCTS Windows 2008/R2 & Exchange 2007, Exchange 2010 EA, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Technical Blogs & Videos: http://www.delawarecountycomputerconsulting.com/
    This post is provided AS-IS with no warranties or guarantees and confers no rights.

  • Best Practice - Long Daily Record List

    Hi,
    I'm new to doing this type of thing in Numbers and I'm struggling with how best to manage a growing journal list. As in:
    - I have a table where I enter a new row every morning. In the row I store values of things like calories, sleep, performance, time (it's mainly to track my cycling) from the day before.
    However, I'm starting to have problems:
    - Now that the table is three months old, it's starting to get really long. What's the best practice here? Sort descending and add a new row at the top every morning? Or is there a way to create a "view" that, for example, only shows the last 14 days?
    - Also, averages that I calculate from a column have become all but meaningless because a single day has so little weight. So what's the best practice here? Is there a way to calculate averages based on, say, the last 14 days only? If I sort descending as above, I guess I could just average the first 14 rows of each column. But I imagine there's a more elegant way to do this that involves date ranges up to the current date (at least, that's how I'd tackle it in FileMaker Pro).
    I suppose this begs the question: is Numbers the right tool? I want to keep this record of my daily cycling stats going forward, hopefully for many years. It would be great to stay with Numbers as it's a pleasure to use and the charts are beautiful. But maybe a database tool is the way to go? Any thoughts? All I really need are a few charts and a few averages. A database does seem like overkill to me.
    Many thanks in advance,
    Pat

    Hi Badunit,
    Thanks for the pointer on extracting the last x values into a new table! Took me a while to reason it through.
    Slightly more convenient than:
      =OFFSET(Table 1::A$1,ROWS(Table 1::A)−16+ROW(),0)
    may be:
      =INDEX(Data::$A:$B,COUNT(Data::$A)−14+ROW())
    That one doesn't require remembering how to adjust the constant (16 vs. 14) when only 14 items are wanted.
    It seems COUNT($A) counts only the body rows, while ROWS() includes header and footer rows too.
    SG

  • Best Practice to save record in a table with appropriate trigger

    Hi,
    At the same time, five users are inserting client information into a client table through an Oracle form. The client table has the following fields:
    CLIENT ID
    CLIENT NAME
    CLIENT ADDRESS
    CLIENT ID is generated automatically by calling a procedure. In this procedure I use the MAX function to get the maximum value of CLIENT ID. After that, I store the newly generated CLIENT ID in a data block item, say :MASTER.CLIENT ID.
    The problem is that all five users can get the same MAX value (suppose 40) of CLIENT ID at the same time, and the Oracle form will surely throw an exception when inserting the record into the client table. CLIENT ID is the PK and it is a member of the MASTER data block.
    I hope all of the above clearly illustrates the problem. Please guide me further: can a PRE-INSERT trigger handle this problem efficiently? If so, how? ...
    Thanks,

    Hello,
    Welcome to the forum!
    "CLIENT ID is generated automatically by calling a procedure." So, in which trigger are you calling that procedure?
    "After that, I store the newly generated CLIENT ID in a data block item, say :MASTER.CLIENT ID." I would guess that you are using the ID-generation procedure in the block-level WHEN-CREATE-RECORD trigger, because that trigger normally initializes values for a new record. If not, please specify.
    "Please guide me further: can a PRE-INSERT trigger handle this problem efficiently?" Yes, PRE-INSERT will work without any problem.
    "If so, how? ..." Because PRE-INSERT fetches the MAX number from the table at save time, not at record-creation time. So suppose five users enter records at the same time: when each of them saves, PRE-INSERT gets the max number at that moment, and since there will be at least some time difference between the five saves, the max number can be fetched without a collision.
    -Ammad
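    As an aside, the standard way to remove the MAX() race entirely is an Oracle sequence fetched in that same PRE-INSERT trigger, since a sequence never hands the same value to two sessions. A minimal sketch (the sequence name is made up, started just above the example's current MAX of 40, and the item is written here as CLIENT_ID):

    CREATE SEQUENCE client_id_seq START WITH 41 INCREMENT BY 1;

    -- PRE-INSERT trigger on the MASTER block:
    SELECT client_id_seq.NEXTVAL
      INTO :MASTER.CLIENT_ID
      FROM dual;

    With this, five simultaneous users each receive a distinct CLIENT ID, and no duplicate-key exception is raised at insert time.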

  • Best practice for Book recording vocals...

    My wife has written a children's book that we want to record and offer as a downloadable MP3 set. She will do the recording, and we have the mic, studio, and GarageBand all working just fine. My question is: what is the best way to make sure that all the tracks for each chapter are equalized the same?
    I know that we are trying to record it all the same, but I just wanted to see if there was any way with GB or some other app to take all the separate vocal tracks and make them all the same in volume, other than watching the settings in GB, which we don't change as far as input goes...
    Hope that is clear and makes sense... I just would like to know what I should do to make it as good as it can be with what we have, then stop worrying about it.
    Cheers,
    Cory

    Hmmm, thanks AppleGuy, that is interesting. I knew that I could create my own presets, but didn't think about this in this way...
    So, if I took the first chapter that we did, adjusted it to my liking (i.e. reverb to just fill it out a bit), then saved it as a preset, I could then apply this preset to all my other recordings so that they would all have the same properties? I know this wouldn't actually fix the possible increase or decrease of volume from different recording sessions, but it would allow me to make sure they all have the same effects settings, yes?
    How do you do this to tracks that are already recorded?
    Thanks,

  • VRF Best Practice: LAN only VRF, Mgmt VRF, Global Routing table or VRF?

    I am setting up a routed LAN (not a WAN) environment on two 6500 switches (Sup720). My goal is to create 32 routed environments separated by logical firewalls (multi-context ASAs). So I want a “core” router in each environment, and I don't want to buy 32 pairs of 6500's - sorry, Cisco.
    Each of these environments is tied together by a core routing environment, running on the same pair of 6500's. No WAN MPLS is going on, and I am trying to use a VRF for each routed environment's core router. The management functions of the 6500 shall run off the core VRF router and IP range (the one that ties all the other VRFs together). Here is a simple diagram:
    VRF1
    ||
    FW1
    ||
    VRFCOR
    ||
    FW2
    ||
    VRF2
    So to go from VRF1 to VRF2, you traverse two firewalls and VRFCOR.
    Several questions related to this design:
    1) Am I nuts to use VRF's in this application?
    2) Is there a better choice than VRF's to do what I want?
    3) Should VRFCOR be the global routing table (IOW, not a VRF), or should it be its own VRF? Another way to ask this: should a router ever run entirely in VRF tables, or should there be at least one global table in use?
    4) Are there problems with any management protocols on a VRF, such as NTP, AAA, SNMP, LOGGING, TELNET? Or have all those been worked out?
    5) Any other suggestions?
    TIA, Will

    VRF is well suited to this kind of application. Refer to http://cisco.com/application/pdf/en/us/guest/netsol/ns171/c649/ccmigration_09186a0080851cc6.pdf to get an idea about the

  • Best practice on calling an Oracle Bpel process

    I am trying to find the best practice for calling an Oracle BPEL process. I know that I can call the process via the database, the app server, another BPEL process, an application, a cron job, etc. I can do any of these, but I want some feedback on what others do and which method is best.
    Thanks

    You're right, there are a lot of different ways to call the BPEL WS. I guess what I'm asking is: if you had several of these options at your disposal, which way would you choose?
    I have an async BPEL process that needs to be called once a day to move some data from one DB to another. What do you think is the best way to perform this: from the DB via cron or the Oracle job scheduler, the BPEL manager on a timer, etc.? I'm leaning towards calling it from the DB via cron or an Oracle job. I want to know if there is a best practice for something like this.
    Thanks

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

    Where does application configuration belong in modern Java EE applications? What best practice(s) recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
    We will be deploying to a variety of environments: unit testing, local dev installs, and cloud-based infrastructure for UAT, stress testing, and production. Many of our properties will vary with each of these environments.
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
    Utilize expected file names (e.g. cassandra.properties) and then build multiple environment specific EARs (eg. appxyz-PROD.ear).
    Build one EAR (eg. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment variable to each config file name (eg cassandra-PROD.properties). And of course adding an environment variable (to the vm or otherwise), so that the code will know which file to pickup.
    What are the best practices people can recommend for solving this common challenge?
    Thanks.

    Hi Bob,
    Sometimes when you create a model using a local WSDL file, the logical port refers to, say, the "C:\temp" folder you picked the file up from instead of the URL mentioned in the WSDL file; you can check the target address of the logical port. Because of this, when you deploy the application on the server, it tries to search that "C:\temp" path instead of the path specified in the soap:address location in the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified in the WSDL file as the soap:address location,
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
    or you can ask your XI developer to give you the URL for the web service and the username/password for the server.

  • CF10 Production Best Practices

    Is there a document or additional information on the best way to configure multiple instances of CF10 in a production environment? Do most folks install CF10 as a ear/war J2EE deployment under JBoss or Tomcat with Apache as the webserver?

    There’s no such document that I know of, no.
    And here’s a perfect example where “best practices” is such a loaded phrase.
    You wonder if “install CF10 as a ear/war J2EE deployment under JBoss or Tomcat with Apache as the webserver”. I’d say the answer to that is “absolutely not”. Most folks do NOT deploy CF as a JEE ear/war. It’s an option, yes. And if you are running A JEE server already, then it does make great sense to deploy CF as an ear/war on said container.
    But would it be a recommended practice for someone installing CF10 without interest in JEE deployment? I’d say not likely, unless they already have familiarity with JEE deployment.
    Now, could one argue “but there are benefits to deploying CF on a JEE container”? Sure, they could. But would it be a “best practice”? Only in the minds of a small minority, I think (those who appreciate the benefits of native JEE deployment and containers). Of course, CF already deploys on a JEE container (Tomcat in CF10, JRun in CF 6-9), but the Standard and Enterprise Server forms of deployment hide all that detail, which is best for most. With those, we just have a ColdFusion directory and are generally none-the-wiser that it runs on JRun or Tomcat.
    That leads then to the crux of your first sentence: you mention multiple instances. That does change things quite a bit.
    First, a couple points of clarification before proceeding: in CF 7-9, such “multiple instance” deployment was for most folks enabled using the Enterprise Multiserver form of deployment, which created a JRun4 directory where instances were installed (as distinguished from the Enterprise Server form I just mentioned above, which hid the JRun guts).
    In CF10, though, there is no longer a “multiserver” install option. It’s just that CF10 Enterprise (or the Trial or Developer editions) does let you create new instances, using the same Instance Manager in the CF Admin that existed for CF Enterprise Multiserver in 7-9. CF10 still only lets you create instances with the Enterprise (or Trial or Developer) edition, not Standard.
    (There is a change in CF10 about multiple instances, though: note that in CF10, you never see a Tomcat directory, even if you want “multiple instances”. When you create them, they are created right under the CF10 directory, as siblings to the cfusion directory. And while that cfusion directory previously existed only in the CF 7-9 Multiserver form of deployment, it now exists even in CF10 Standard, as the only instance Standard can use.)
    So all that is a lot of info, not any “best practices”, but you asked if there was any “additional info”, and I thought that helpful for you to have as you contemplate your options. (And of course, CF10 Enterprise does still let you deploy as a JEE ear/war if you want.)
    But no, doing that would not be a best practice. If someone asked for “the best way to configure multiple instances of CF10 in a production environment”, I’d tell them to just proceed as they would have in CF 7-9, using the same CF Admin Instance Manager capability to create them (and optionally cluster them).
    All that said, everything about CF10 does now run on Tomcat instead of JRun, and some things are improved under the covers, like clustering (and related things, like session replication), because those are now Tomcat-based features (which are actively updated and used by the Tomcat community), rather than JRun-based (which were pretty old and hardly used by anyone since JRun was EOL-ed several years ago).
    I’ll note that I offer a talk with a lot more detail contrasting CF10 on Tomcat to CF9 and earlier on JRun. That may interest you, snormo, so check out the presentations page at carehart.org.
    Hope all that’s helpful.
    /charlie
    PS: You conclude with a mention of Apache as the web server. And sure, if one is on a *nix deployment or just favors Apache, it’s a fine option. But someone running CF10 on Windows should not be discouraged from running on IIS. It’s come a long way and is now very secure, flexible, and capable, whether used for one or multiple instances of CF.

  • Best Practice to save the contacts in the Database

    Hi everybody,
    I'm looking for some tips about the data in the database. I would like to know the best practices for saving a record. For example, I want to save the First Name, Second Name, and Last Name in the DB. How should I save them? I mean, with the first letter in uppercase, or the whole record in lowercase?
    E.g., "Austin Martin" or "austin martin"?
    What is the best way or best practice to do that?
    Can someone tell me? I would appreciate it, thanks.
    Fabián Carrillo
    Siebel CRM Consultant

    Hi!
    Not quite sure what you're after here. Generally, store the data in the way it's going to be presented - sentence case in the examples you've given.
    If you're thinking along the lines of case sensitivity in querying within Siebel, then take a look at the Case Insensitivity Wizard in Siebel Bookshelf:
    http://download.oracle.com/docs/cd/E14004_01/books/UPG/UPG_DB_Upg_Util10.html
    Regards,
    mroshaw
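    If the normalization is ever done at the database layer instead, Oracle's built-in INITCAP performs exactly the conversion from the example, uppercasing the first letter of each word and lowercasing the rest:

    -- returns 'Austin Martin'
    SELECT INITCAP('austin martin') FROM dual;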

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Please clarify!
    Where can I find the current information, so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state an Oracle-recommended best practice; they only speak to the specific patch package they describe. These do not help me in making an enterprise statement of practices and standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    "A customer complained about the following: Your company statements are not clear... On your web page - http://www.oracle.com/security/critical-patch-update.html"
    Who is the "your" to which you are referring?
    <snip>
    "Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of patching for security and functionality then it should be stated so here!"
    Um. OK.
    "Please clarify!"
    Of whom are you asking for a clarification?
    "Where can I find the current information so that I can use the Official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you..."
    Who is the "you" to which you refer?
    "...are giving me do not state Oracle recommended Best Practice, they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards. I need to close the process out to capture a window of availability for Practices and Standards approval."
    Be our guest.
    "Do we..."
    What do you mean "we", Kemosabe?
    "...have any Best Practice document about PSU patches available for customers?"
    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site. No one here is responsible for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • "Best practice" for components calling components on different panels.

    I'm very new to Swing. I have been learning from tutorials, but these are always relatively simple interfaces, in which every component and container is initialised and added in the constructor of a main JFrame (extension) object.
    I would assume that more complex, real-world examples would have JPanels initialise themselves. For example, I am working on a project in which the JFrame holds multiple JPanels. One of these Panels holds a group of JToggleButtons (grouped in a ButtonGroup). The action event for each button involves calling the repaint method of one of the other Panels.
    Obviously, if you initialise everything in the JFrame, you can simply have the ActionListener refer to the other JPanel directly, by making the ActionListener a nested class within the JFrame class. However, I would like the JPanels to initialise their own components, including setting the button actions, by using an extension of class JPanel which includes the ActionListeners as nested classes. Therefore the ActionListener has no direct access to JPanel it needs to repaint.
    What, then, is considered "best practice" for allowing these components to interact (not simply in this situation, but more generally)? Should I pass a reference to the JPanel that needs to be repainted to the JPanel that contains the ActionListeners? Should I notify the main JFrame that the Action event has fired, and then have that call "repaint"? Or is there a more common or more correct way of doing this?
    Similarly, one of the JPanels needs to use a field belonging to the JFrame that holds it. Should I pass a reference to this object to the JPanel, or should I have the JPanel use "getParent()", or some other method?
    I realise there are no concrete answers to this query, but I am wondering whether there are accepted practices for achieving this. My instinct is to simply pass a JPanel reference to the JPanel that needs to call repaint, but I am unsure how extensible this would be, how tightly coupled these classes would become.
    Any advice anybody could give me would be much appreciated. Sorry the question is so long-winded. :)

    Hello,
    nice to get feedback.
    I've been looking at a few resources on this issue from my last post. In my application I have been using the Observer and Observable classes to implement the MVC pattern suggested by T.PD.(...)
    Two issues (not fatal, but annoying) with this are:
    -Observable is a class, not an interface; since most of my Observers already extend JPanel (or some such), I have had to create inner classes.
    -If an Observer is observing multiple Observables, it will have to determine which Observable called its update() method (by using reference equality or class comparison or whatever). Again, a very minor issue, but something to keep in mind.
    I don't deem those issues minor. The second one in particular is rather annoying in terms of maintenance ("Err, remind me, which widget is calling this update() method?").
    In addition to that, Observable/Observer are legacy non-generified classes, which incurs a loosely-typed approach (the subject and context arguments to the update(Observable subject, Object context) method give hardly any info in themselves and generally have to be cast to provide app-specific information).
    Note that the "notification model" from AWT and Swing widgets is not Observer-Observable, but merely EventListener . Although we can only guess what reasons made them develop a specific notification model, I deem this essentially stems from those reasons.
    The contrasting approaches are discussed in this article by Bill Venners: The Event Generator Idiom (http://www.artima.com/designtechniques/eventgenP.html).
    N.B.: this article is from a previous-millennium series of "Design Techniques" articles that I found very useful when I learned OO design (GUI or not).
    One last nail against the Observer/Observable model: these are general classes that can be used regardless of the context (GUI/non-GUI code), so this makes it easier to forget about Swing threading rules when using them (essentially: is the update method called in the EDT or not).
    "If anybody has any information on the performance or efficiency of using Observable/Observer..."
    I would be very surprised if this had any performance impact. If it did, that would mean that you have either:
    - a lot of widgets that are listening to one another (and then the Mediator pattern is almost a must to structure such entangled dependencies). And even then I don't think there could be any impact below a few thousands widgets.
    - expensive or long-running computation in the update methods. That's unrelated to the notification model itself.
    - a lot of non-GUI components that use the Observer/Observable to communicate among themselves - all the more risk then, to have a GUI update() called outside the EDT, see remark above.
    "(or whether there are inbuilt equivalents for Swing components)"
    See discussion above.
    As far as your remark 2 goes (if one observer observes more than one subject, the update() method contains branching logic): this also occurs with the Event Delegation model indeed; for example, it is quite common for people to complain that their actionPerformed() method becomes unwieldy when the same class listens to several JButtons.
    The usual advice for this is, use anonymous listeners, each of which handles the event from only one source (and generally very close in code to the definition of that source), and that simply translates the "generic" event notification method into a specific method call of a Controller or Mediator .
    Best regards.
    J.
    Edited by: jduprez on May 9, 2011 10:10 AM

  • We are evaluating the use of iPod touch devices to record best practice videos on our manufacturing floor and to post to an internal Moodle web site. How can you upload a video from the iPod touch to a site other than YouTube?

    We are evaluating the use of iPod touch devices to record best practice videos on our manufacturing floor and to post to an internal Moodle web site. How can you upload a video from the iPod touch to a site other than YouTube? The Moodle upload interface is expecting a file selection dialog box like Windows or OS X. I do not want to have to go through an intermediary step of messing with a PC.
    Thanks!

    It should be around 7 and a half gigs. In iTunes, across the bottom there should be a bar that shows how much storage is being used and by what (music, movies, apps, etc.). To make music take up less room, you can check the box to convert the music to 128 kbps AAC. This lowers the quality, but with most earbuds and speakers, you can't even tell the difference.
    The iPod touch has parental controls built in. You'll find them in Settings. I think they only work for enabling/disabling Safari, Mail, YouTube, and App Store. Here's an app that does more: http://www.mobicip.com/online_safety/ipod_touch

  • What is the best practice of deleting large amount of records?

    Hi,
    I need your suggestions on the best practice for regularly deleting a large number of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a Web Job scheduled to run every day to delete the old records.
    To prevent table locking when deleting too many records at once, my web job code limits the number of deleted records to 5000 per iteration and the batch delete count to 1000 each time it calls the delete stored procedure:
    1. Get the total count of old records (older than 3 days)
    2. Get the total iterations: iterations = (total count / 5000)
    3. Call the SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     DECLARE @table TABLE ([RecordId] INT)  -- holds the keys selected for this iteration
     BEGIN
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     END
     DECLARE @RowsDeleted INTEGER
     SET @RowsDeleted = 1
     WHILE(@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01'
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records - far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total count of records older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total count to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count: 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, which puts the total time at around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'.
    2. The peak-time insert rate is N times the average. E.g., if the average per hour is 10,000 and peak time is 5 times more, that gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts were perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 batches gives 4.32 minutes per batch.
    4. Create a delete statement and a loop that, on iteration I (I running from 1 to 1,000), deletes records with creation time < (3 days ago) - 4,320 minutes + (4.32 * I) minutes, so the cutoff advances one slice at a time until it reaches the 3-day boundary.
    In this way the number of records deleted in each batch is not even and not known in advance, but it should mostly stay within 5,000; and even though you run a lot more batches, each batch will be very fast.
    Frank
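    A rough T-SQL sketch of that sliding-cutoff loop (table and column names are borrowed from the question above; starting the window 6 days back assumes the purge runs daily, so nothing is much older than the previous 3-day boundary):

    -- Sliding-cutoff purge: advance the delete threshold one small time
    -- slice per iteration so each DELETE touches only the rows in that slice.
    DECLARE @Batches INT = 1000;        -- step 2: batch count, padded for peak inserts
    DECLARE @SliceSeconds INT = 259;    -- step 3: ~4.32 minutes per slice
    DECLARE @I INT = 1;
    DECLARE @Cutoff DATETIME;
    WHILE @I <= @Batches
    BEGIN
        -- cutoff advances from 6 days ago up to the 3-day boundary
        SET @Cutoff = DATEADD(SECOND, @SliceSeconds * @I, DATEADD(DAY, -6, GETDATE()));
        -- earlier iterations already removed everything older than the previous
        -- cutoff, so this statement deletes roughly one slice of rows
        DELETE FROM [MyTable] WHERE [CreateTime] < @Cutoff;
        SET @I = @I + 1;
    END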

  • Best practice for calling stored procedures as target

    The scenario is this:
    1) Source is from a file or oracle table
    2) The target will always be Oracle PL/SQL stored procedures which do the insert or update (APIs).
    3) Each failure from the stored procedure must log an error so the user can re-submit the corrected file for those error records
    There is no option to create an E$ table, since there is no control option for the flow around procedures.
    Is there a best practice around moving data into Oracle via procedures? In Oracle EBS, many of the interfaces are pure stored procs and not batch interface tables. I am concerned that I must build dozens of custom error tables around these apis. Then it feels like it would be easier to just write pl/sql batch jobs and schedule with concurrent manager in EBS (skip ODI completely). In that case, one could write to the concurrent manager log and the user could view the errors and correct.
    I can get a simple procedure to work in ODI where the source is the SQL and the target is the PL/SQL call to the stored proc in the database. It loops through every row in the SQL source and calls the PL/SQL code.
    But I cannot see how to flag which rows have failed, or which table would log the errors to begin with.
    Thank you,
    Erik

    Hi Erik,
    Please take a look at these posts:
    http://odiexperts.com/?p=666
    http://odiexperts.com/?p=742
    They could help you find a way to solve your problem.
    I have already used this approach to call Oracle EBS APIs, and it worked pretty well.
    I believe an IKM could be built to automate all the work, but I never stopped to try...
    Does that help you?
    Cezar Santos
    http://odiexperts.com
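    For the general shape, independent of what those posts automate inside ODI: a common pattern is to wrap each API call in a PL/SQL block that traps the failure and logs the offending row to a custom error table, so the user can correct and resubmit just those records. A rough sketch with hypothetical package, procedure, table, and bind names:

    BEGIN
      -- hypothetical EBS-style API call for one source row
      xx_target_api.insert_row(p_id => :src_id, p_name => :src_name);
    EXCEPTION
      WHEN OTHERS THEN
        -- log the failed row instead of aborting the whole load
        INSERT INTO xx_api_errors (source_id, error_msg, error_time)
        VALUES (:src_id, SQLERRM, SYSDATE);
    END;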
