Right way to fire table changes

Suppose I have a class that extends AbstractTableModel and I use a List to store the data, like:

    public class MyTableModel extends AbstractTableModel {
        List dataList = Collections.synchronizedList(new LinkedList());

        public void insertMyData(String data) {
            synchronized (dataList) {
                // ... do something
                ##Line A##
            }
            ##Line B##
        }
    }

If I want to fire table changes, say by calling

    this.fireTableDataChanged();

where should I place it? At ##Line A## or ##Line B##?
Or should I use

    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            fireTableDataChanged();
        }
    });

Moreover, I would like to ask: if I implement my own TableCellRenderer and override the getTableCellRendererComponent method, should all the calls like setText, setBackground, setForeground, ... also go through SwingUtilities.invokeLater, or can I just call them as usual?
Thanks a lot.

Since you mention synchronized several times, I assume your model will be accessed from different threads
(although you didn't write this). Client code will have to be careful: getRowCount may return 10, but by the
time the client gets to row 7 it may no longer exist! What does your model do when getValueAt is passed
indices that are out of bounds?
As far as synchronization goes, I think you are going overboard with

    List dataList = Collections.synchronizedList(new LinkedList());

since you are using synchronized blocks elsewhere. In general, I find little use for those synchronizedXXX
factory methods -- usually the class holding the collection member provides the thread safety.
As far as the listeners go, model listeners assume their event handlers execute on the EDT, so
you will have to call the fire* methods from the EDT. This means you have to use invokeLater or invokeAndWait.
Using invokeAndWait defeats the multithreading you are building, so I think you
should call invokeLater, and since this is an asynchronous call it makes no sense to do it
from within a synchronized block. Just be aware that your JTable may query the model
with out-of-bounds indices.
Also, if your model's data is written or changed seldom but read often, I would use the serial immutable
pattern: have the List reference dataList point to a list whose state is immutable. If you want to change
the data, make a copy, change that, and then have dataList refer to the updated copy.
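A minimal sketch of how that advice might look in practice (assuming a hypothetical single-column String model; insertMyData is the method name from the question, and the worker thread only touches dataList through the model):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import javax.swing.SwingUtilities;
    import javax.swing.table.AbstractTableModel;

    public class MyTableModel extends AbstractTableModel {

        // "Serial immutable" style: the reference is swapped, the list itself is never mutated.
        private volatile List<String> dataList = Collections.emptyList();

        // May be called from any thread.
        public void insertMyData(String data) {
            final int row;
            synchronized (this) {
                List<String> copy = new ArrayList<>(dataList);
                copy.add(data);
                dataList = Collections.unmodifiableList(copy);
                row = copy.size() - 1;
            }
            // Fire on the EDT, outside the synchronized block (##Line B## in the question);
            // fireTableDataChanged() would also work here, it just repaints more.
            SwingUtilities.invokeLater(() -> fireTableRowsInserted(row, row));
        }

        public int getRowCount()    { return dataList.size(); }
        public int getColumnCount() { return 1; }

        public Object getValueAt(int rowIndex, int columnIndex) {
            List<String> snapshot = dataList;            // one consistent snapshot per call
            if (rowIndex < 0 || rowIndex >= snapshot.size()) {
                return null;                             // tolerate out-of-bounds queries from the JTable
            }
            return snapshot.get(rowIndex);
        }
    }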

Similar Messages

  • How to fire "table change" once you click a region outside the table area?

    I have a JTable created with DefaultTableModel. Usually the edit behavior of table cell is like this:
    1) Select one cell, it is highlighted in blue.
    2) Double click the cell, it's made editable with white background.
    3) Two scenarios
    (a) Once you finish editing and press "Enter", the cell background turns from white to blue. At the same time, the table change is "fired".
    (b) Once you finish editing and click another cell, the original cell turns white, and the newly clicked cell is highlighted in blue. At the same time, the table change is "fired".
    4) While editing one cell, if you click somewhere OUTSIDE the table area, the table change is NOT fired.
    [This causes confusion: the user sees the changed text in the table cell, but actually it is not stored and will not be stored in the table.]
    So what I want to implement is: once you click a region outside the table, the text in that editable cell (where you were just editing) is stored and the change is fired.
    Can you help me? Thanks in advance!

    sabre150 & other experts,
    [see background in my first message]
    The user is still not completely satisfied with the current behavior, though "table changed" can be fired when clicking other controls (e.g. buttons or checkboxes).
    If I click a panel region (instead of a particular control), the "table changed" cannot be fired, although it is expected to be.
    I guess the reason is:
    a panel doesn't have focus, so when clicking the panel, focus is not taken away from the table cell.
    Any hints?
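    One approach that is often suggested for this (a hedged sketch; table and panel are assumed to be the JTable and the surrounding JPanel): JTable's terminateEditOnFocusLost client property makes the table commit a pending edit when focus moves elsewhere, and a mouse listener on the panel can stop the editor for clicks on non-focusable areas.

        // Commit (rather than silently keep) a pending cell edit when focus leaves the table.
        table.putClientProperty("terminateEditOnFocusLost", Boolean.TRUE);

        // A plain panel is not focusable, so clicking it does not pull focus away from
        // the cell editor; stopping the editor explicitly covers that case.
        panel.addMouseListener(new java.awt.event.MouseAdapter() {
            @Override
            public void mousePressed(java.awt.event.MouseEvent e) {
                if (table.isEditing()) {
                    table.getCellEditor().stopCellEditing();
                }
            }
        });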

  • Right Way to change MBOs Connections

    Hello Experts,
    I'm really enjoying the communication here, and I got a new one.
    What's the right way of changing MBO connections, to SAP for instance?
    I tried 2 methods.
    Method One
    Title: Its just too much Work
    Proceeding:
    I start rebinding every MBO and its operations to the new connection.
    Drawback:
    If I have 1 MBO it may take 30 seconds; if there are 40 or 50 MBOs it will take a lot of time.
    Also, the output tables from the BAPIs are either recreated or, if not, deleted,
    so you have to reimplement all the operations once again. And there is the additional
    problem that we can all miss something and do something wrong.
    Method Two
    Title: Simpler way
    Proceeding:
    change the connection profile itself at workspace from DEV->QA or QA->PRD
    Drawback:
    supposing that you want independence between the apps' connections, like app1 on DEV and app2 on PRD,
    this will lead to N connections for N applications.
    DEV: (Development)
    QA: (Quality)
    PRD: (Production)
    I would like to know if you guys have more approaches and what you think of these.
    Cheers,
    Laguerta

    Daniel Laguerta
    I will never follow method one as suggested by you.
    To add to "Method 2": I would create a separate (new) workspace for each instance, like DEV, QA, PROD.
    e.g. once DEV is done, I'll export the whole project and import it into a new workspace for QA.
    Create a new SUP QA and SAP connection profile
    Change connection profile to each mbo and its operations.
    And deploy to SUP QA Env
    Still it is time consuming.
    There is another method (which should be preferable):
    Deploying mbo package from SCC instead of creating workspace and connection profiles.
    Deploying MBO from Sybase Control Center (SCC)
    Regards,
    JK

  • What is the right way to display a table in Java Web Dynpro using a node?

    Hi experts,
    I am trying to show a node of cardinality 0...n as a table in an Adobe form in Java Web Dynpro, but it is not showing properly. Can anybody please tell me the right way to display a table on an Adobe form using a node of cardinality 0...n or 1...n in Java Web Dynpro? In ABAP Web Dynpro, we can drag and drop a node of cardinality 0...n or 1...n to show it as a table and it works fine. Is the same possible in Java Web Dynpro? Please help.
    Thanks and Regards.
    Vaibhav Tiwari.

    Please refer to my post; you will get the answer:
    Dynamic Table -  same data repeating in all rows
    Special care should be taken in designing the context for the table attribute.
    The singleton property of the nodes also plays an important role. I had this doubt from the beginning, when you reported this problem the first time, but you marked it as solved, so I thought there might be some other issue; when you reported it again I did some analysis.
    Now coming to the final solution:
    For designing a table in an Adobe interactive form you have to consider the following.
    You have to design the view context across the following levels; here are the properties:
    PDFDataSource (Parent Level 1) - Cardinality 1:1 - Singleton: true - This is assigned to the data source
    TableList (Parent Level 2) - Cardinality 1:1 - Singleton: true
    TableWrapper (Parent Level 3) - Cardinality 0:n - Singleton: true
    TableData (Parent Level 4) - Cardinality 0:1 - Singleton: false (this is the main point)
    Then under TableData value node, you have to put all your table attributes.
    This value node name can be anything, but the hierarchy should be the same as I have mentioned above.
    Please try out these steps and get back to me if you have any doubt.

  • What is the right way to change the active JInternalFrame for an MDI app?

    I am working on my own implementation of the window menu. The action that is triggered when a customer chooses a window to activate from the list in the menu is not behaving as I expected. The code I wrote (below) switches frames correctly, but the caption bar never gets updated, and if you restore a frame from an icon the frame is not correctly activated; there is even a restore button which, if you push it, fixes things up and the frame then behaves normally.

        if (frameToActivate.isIcon ())  {
            //  Restore from icon
            desktopPane().getDesktopManager().deiconifyFrame (frameToActivate);
        }
        desktopPane().getDesktopManager().activateFrame (frameToActivate);
        desktopPane().setSelectedFrame (frameToActivate);

    I did a search of the web and found a tip on JavaWorld (http://www.javaworld.com/javaworld/jw-05-2001/jw-0525-mdi.html) which led me to try doing this from a different angle - the JInternalFrame's point of view. This code works:

        try {
            if (frameToActivate.isIcon ())  {
                //  Restore from icon
                frameToActivate.setIcon (false);
            }
            frameToActivate.moveToFront();
            frameToActivate.setSelected (true);
        } catch (PropertyVetoException error) {
            error.printStackTrace();
        }

    My question is: why does the desktop-based approach not work? If methods exist that appear to let you restore and switch between frames, why are they ineffective? Am I missing something obvious that I should be doing?
    If using the JInternalFrame methods is the right way to go then I will use them. It just seems that if the DesktopManager has methods that advertise that it supports managing the active frame, then they should work. Before I ignore them I want to check with you to see if there is a right way to use them.
    Ian

    So, this is another batch of duke dollars I cannot assign - since I solved my own problem:-)
    I had an epiphany and tried setting break points to see what code was executed when you click on an inactive frame. From that I determined that DefaultDesktopManager.activateFrame, as implemented, does not activate the frame but acknowledges the activation of a frame and does a small amount of bookkeeping work for the DesktopManager. So, the solution is the code I wrote to switch focus using the JInternalFrame's methods. Since I did not want to have to write those nine lines of code in the couple of places I want to programmatically switch the active frame, I added get/setActiveFrame methods to my JDesktopPane derivative. In case others face this problem, here is the code (warning: I have not yet set up building the JavaDocs for this project so I cannot vouch for the validity of the JavaDoc, but the code does work):

        /**
         * Bring frameToActivate to the front (restoring from icon if necessary) and make it the
         * selected frame.  This method does all the things required to switch the active frame for
         * an MDI application unlike: @link JDesktopPane.setSelectedFrame, which does not change the
         * focus; @link javax.swing.DefaultDesktopManager.activateFrame which does not correctly
         * handle iconified frames or switch the focus properly; and
         * @link javax.swing.JInternalFrame.setSelected which also does not handle iconified frames.
         * @param frameToActivate the frame to bring to the front and become the active window
         * @throws IllegalArgumentException
         */
        public void setActiveFrame (JInternalFrame frameToActivate) throws IllegalArgumentException  {
            if (frameToActivate == null)
                throw new IllegalArgumentException ("setActiveFrame must be passed a non null value.");
            try {
                if (frameToActivate.isIcon ())  {
                    //  Restore from icon
                    frameToActivate.setIcon (false);
                }
                frameToActivate.moveToFront();
                frameToActivate.setSelected (true);
            } catch (PropertyVetoException error) {
                error.printStackTrace();
            }
        }

        /**
         * This method returns the currently active frame.  This method returns the same frame
         * as <code>getSelectedFrame</code> and is provided for symmetry with <code>setActiveFrame</code>.
         * @return the currently active frame
         * @see LDesktopPane.setActiveFrame
         * @see javax.swing.JDesktopPane.getSelectedFrame
         */
        public JInternalFrame getActiveFrame ()  {
            return getSelectedFrame ();
        }

    IL
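    A short usage sketch for the method above, e.g. from a window-menu action (desktop is assumed to be an instance of the LDesktopPane derivative shown above, and frame the JInternalFrame chosen from the menu; both names are illustrative):

        windowMenuItem.addActionListener(e -> {
            // One call restores from icon if needed, moves the frame to the front
            // and transfers the selection/focus.
            desktop.setActiveFrame(frame);
        });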

  • Help with creating a sql file that will capture any database table changes.

    We are in the process of creating DROP/Create tables, and using exp/imp data into the tables (the data is in flat files).
    Our client is a bit curious to work with. They make alterations to their database (change the layout, change data types, drop tables) without our knowing. This has created a whole lot of issues for us.
    Is there a way we can create a SQL script which can capture any table changes in the database, so that when the client tries to execute the imp batch file, the SQL file first checks whether any changes have been made? If changes were made, it should stop execution and give an error message.
    Any help/suggestions would be highly appreciated.
    Thanks,

    Just to clarify...
    1. DDL commands are like CREATE, DROP, ALTER. (These are different than DML commands - INSERT, UPDATE, DELETE).
    2. The DDL trigger is created at the database level, not on each table. You only need one DDL trigger.
    3. You can choose the DDL commands for which you want the trigger to fire (probably, you'll want CREATE, DROP, ALTER, at a minimum).
    4. The DDL trigger only fires when one of these DDL commands is run.
    Whether you have 50 tables or 50,000 tables is not significant to performance in this context.
    What's significant is how often you'll be executing the DDL commands on which the trigger is set to fire and whether the DDL commands execute in acceptable time with the trigger in place.

  • How to create a custom panel in the right way (without having an empty panel in the file info) ?

    Hi Everyone
    My name is Daté.
    I'm working in the fashion industry as a designer and Design consultant to help fashion brands improving the design workflow by using Adobe softwares and especially Illustrator.
    I'm not a developer, but I'm very interested in the possibility of introducing XMP technology to enable more DAM workflows in the fashion industry.
    Fashion designers produce a lot of graphical objects in Illustrator or Photoshop. Unfortunately they face a big challenge in how to manage, search, classify and retrieve these files faster. Of course PDM or PLM systems are used in the fashion industry to manage data, but for many companies, implementing this kind of database is very complex.
    When I look at what you can do with XMP, it seems to be an interesting way of managing design files, so I started to follow Adobe's instructions to try to build a custom panel.
    The main idea is to (Theory) :
    create custom panels used by fashion designers to classify their design files.
    Use Adobe Bridge to search files, create smart collection to make basic reports in pdf and slideshows
    Find someone to make a script able to export metadata in xml files
    Use indesign and the xml file to generate automatically catalogues or technical sheets based on xmp values
    I have created a custom panel by using the generic panel provided by Adobe, and I have modified the fields to fit the terms used in the fashion industry, and it works well.
    But unfortunately, when i try to create my own custom panel from scratch with Flashbuilder (4.6) and the Adobe CSExtensionBuilder_2 (Trial version), it doesn't work!
    Here is the process :
    I have installed flashbuilder 4.6
    I have download the XMP Fileinfo SDK 5.1 and placed the com.adobe.xmp.sdk.fileinfo_fb4_1.1.0.jar in C:\Program Files (x86)\Adobe\Adobe Flash Builder 4.6\eclipse\plugins
    In Flashbuilder, i have created a new project and select xmp Custom panel
    The new project is created in flashbuilder with a field with A BASIC Description Field
    To generate the panel, right click the project folder and select xmp / Publish Custom Panel
    The panel is automatically generated in the following folder : C:\Users\AppData\Roaming\Adobe\XMP\custom file info panels\3.0\panels
      Go to illustrator, Open the file Info
    The panel appears empty
    The others panel are also empty
    The panel is created and automatically placed in the right folder, but when you open it in Illustrator by selecting the File Info option in the File menu, this custom panel appears empty!!! (Only the title of the tab is displayed.) This panel also prevents the other panels from being displayed.
    When you delete this custom panel from the folder C:\Users\AppData\Roaming\Adobe\XMP\custom file info panels\3.0\panels and go back to the File Info, the other panels display their content properly.
    I also tried to use the XMP Namespace Designer plugin to create my own namespace. This plugin is also able to generate a custom panel, but that one also appears empty in AI or Photoshop.
    I tried to follow the process described in the Adobe XMP documentation many times, but it didn't work.
    It seems that many people have this issue, but I didn't find a solution in the forum.
    I tried to create a trust file (cfg), but it didn't work.
    It would be very kind if you could help me understand why I can't create a custom panel normally and how to do it the right way.
    Thanks a lot for your help,
    Best regards,
    Daté 

    Hi Sunil,
    After many trials, I realized the problem was not coming from the trust file, but from the way I had created the custom panel.
    There are 2 different ways; the first, described below, is not working, whereas the second is fine:
    METHOD 1 :
    I have downloaded the XMP-Fileinfo-SDK-CS6
    In the XMP-Fileinfo-SDK-CS6 folder, I copied the com.adobe.xmp.sdk.fileinfo_fb4x_1.2.0.jar plugin from the Tools folder and pasted it into the plugins folder of Flash Builder 4.6
    The plugin install an XMP project
    In Flashbuilder 4.6 i have created a new project (File / New /Project /XMP/XMP Custom Panel)
    A new xmp project is created in flashbuilder.
    You can publish this project by right clicking the root folder and selecting XMP / Publish Custom Panel
    The custom file info panel is automatically published in the right location which is on Mac : /Users/UserName/Library/Application Support/Adobe/XMP/Custom File Info Panels/3.0 or /Users/UserName/Library/Application Support/Adobe/XMP/Custom File Info Panels/4.0
    Despite the publication of the custom file info panel and the creation of a trust file in the following location : "/Library/Application Support/Macromedia/FlashPlayerTrust", the panel is blank in Illustrator.
    I tried this way several times, with no good results.
    METHOD 2 :
    I have installed Adobe CSExtensionBuilder 2.1 in Flash Builder
    In FlashBuilder i have created a new project (File / New /Project /Adobe Creative Suite Extension Builder/XMP Fileinfo Panel Project)
    As the system displays a warning about the SDK version to use to correctly create a custom file info panel, I changed the SDK to SDK 3.5A
    The warning message is : "XMP FileInfo Panel Projects must be built with Flex 3.3, 3.4 or 3.5 SDK. Building with Flex 4.6.0 SDK may result in runtime errors"
    When i publish this File info panel project (right click the root folder and select Run as / Adobe illustrator), the panel is published correctly.
    The last step is to create the trust file to display the fields in the panel and everything is working fine in Illustrator.
    The second method seems to be the right way.
    For sure something is missing in the first method, and i don't understand the difference between the XMP Custom Panel Project and the XMP Fileinfo Panel Project. Maybe you can explain it to me.
    So what is the best solution? Which is the right SDK to use according to the Creative Suite version (the system asks to use the 3.3 or 3.5 SDK for custom panels, so why)?
    I agree with Pedro, a step-by-step tutorial about this would help a lot of people, because it's not so easy to understand!
    Sunil, as you belong to the staff team, can you tell me if there is  :
    A plugin or a piece of software capable of extracting the XMP from Illustrator files to generate XML workflows in InDesign to create catalogues
    A plugin to allow InDesign to get custom XMP in live captions
    A plugin to allow Bridge to get custom XMP in the Output mode to make PDFs or web galleries from a smart collection
    How can you print the XMP data with the thumbnail of the file ?
    Thanks a lot for your reply.
    Best Regards
    Daté

  • Right way of configuring higher MTU over a Port Channel

    Hi guys,
    I have a running critical Port-Channel between two locations.
    Here's the config
    SW1:
    interface Port-channel2
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
    end
    interface GigabitEthernet1/45
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-protocol lacp
     channel-group 2 mode active
    end
    interface GigabitEthernet1/46
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-protocol lacp
     channel-group 2 mode active
    end
    SW2
    interface GigabitEthernet1/1
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-protocol lacp
     channel-group 2 mode passive
    end
    interface GigabitEthernet1/2
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-protocol lacp
     channel-group 2 mode passive
    end
    interface Port-channel2
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
    end
    Now I need to increase the MTU from the default value to 9198. What is the right way to do it and avoid any connectivity loss or port-channel restart?
    Does it matter what switch I start first?
    Thanks!
    L.E. both SW are WS-C4948

    Hi,
    Because you are using layer 2 interfaces, there is no fragmentation support at layer 2, and frames received on an interface with an unsupported size will be dropped.
    I think the best way for you to proceed is to lab this up and verify what happens - it may be that you need to make the changes on the switches at either end of the channel within a very short time frame to prevent too large an outage.
    When you are ready to make your change, I think the best way to do this is to use the interface range command and apply the 'mtu' command to all the interfaces in this range. I don't think it matters which switch you apply this change to first, and I don't believe, if you are hinting at the 802.3ad (controlled by system-priority) decision maker, that it makes any difference.
    HTH
    Mike

  • Right way to configure Tomcat JDBC resource for MaxDB

    Hi,
    I need to configure one Tomcat server with a pool of connections for a MaxDB instance with the following database parameters:
    MAXUSERTASKS=150
    SESSION_TIMEOUT=60
    I did this configuration on Tomcat:
    <Resource name="jdbc/myApp"
                    auth="Container"
                    type="javax.sql.DataSource"
                    username="DBUSER"
                    password="secret"
                    driverClassName="com.sap.dbtech.jdbc.DriverSapDB"
                    url="jdbc:sapdb://dbserver/DBNAME"
                    maxActive="150"
                    maxIdle="75"
                    validationQuery="SELECT NOW() FROM DBA.DUAL"/>
    Is this the right way? Is there a way to check whether the pool is working on the database side? On the Java side I checked that javax.sql.DataSource.loginTimeout is Unavailable, so what is the better way to avoid errors like:
    2008-06-02 13:54:36  2165 WNG 11824 COMMUNIC Releasing  T187 command timeout
    I think this problem occurs because SESSION_TIMEOUT is 60 and the loginTimeout is Unavailable on javax.sql.DataSource, so the Tomcat DBCP doesn't know when it needs to revalidate the session.
    thanks for any help.
    Clóvis

    Hi Clovis,
    to be honest, I'm no Tomcat expert at all. All I can do is read the documentation.
    The error message you posted was about a timeout due to a session that had been idle too long.
    From the changes in your connection pool setup I cannot see any parameter that would influence this.
    Instead - as already written - you may either want to change the session timeout in general, for the user, or (by adding some connection parameters) for the sessions.
    Concerning the question of whether it's better to have many connections or just a few - well, this depends on what the Java tasks do with the sessions.
    The connection pool is used to save resources on the database side and the time necessary to log on and create a session.
    So whenever any of the Java tasks needs a db connection it should "just get one" from the pool.
    When you have "high load" (by the way, what exactly does that mean? High CPU usage on the J2EE server? Huge transaction volume? Many parallel transactions?) this does not change.
    Anyhow you have to be aware of the fact that the database can only process as many SQL requests in parallel as there are user kernel threads (UKTs), and each of the UKTs needs to be able to run on a separate CPU core. Therefore if it's necessary to have real parallel SQL activity, many connections may not be the best idea.
    On the other hand this requirement is seldom necessary - most often sessions can wait a bit until they get the reply from the database and synchronize on each other later on. At least that's how we do it with NetWeaver.
    KR Lars
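    For what it's worth, a minimal sketch of how application code inside the web app would borrow a connection from the pool configured above (the JNDI name jdbc/myApp comes from the Resource element; the surrounding class and method names are made up):

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class PoolCheck {
            // Call this from code running inside the web application, where
            // java:comp/env is bound by Tomcat.
            public static void printDbTime() throws Exception {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/myApp");

                // getConnection() borrows a pooled session; close() returns it to the pool.
                try (Connection con = ds.getConnection();
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT NOW() FROM DBA.DUAL")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }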

  • Right way to declare field symbols

    I am new to field symbols, please help.
    I want to know which is the right way of declaring them:
    types: begin of typ_tab,
               vbeln  type vbap-vbeln,
               posnr type vbap-posnr,
               matnr  type vbap-matnr,
              end of typ_tab.
    data:  itab type standard table of typ_tab,
              wa  type typ_tab.
    field-symbols: <f_t> type standard table, 
                           <f_w> type any,                or      <f_w>  like  line of typ_tab,  ??
                            <f_s> type any.
    select  vbeln posnr  ....... into itab......
    assign  itab to <f_t>.
    assign wa  to <f_w>.
    loop at <f_t> assigning <f_w>.
    assign component 'vbeln' of structure <f_w>  to <f_s>.
    write : / <f_s>.
    endloop.
    OR
    loop at itab assigning <f_w>. 
    write: / <f_w>-vbeln.
    endloop. 
    Are these 2 statements correct? If so, which one should I use? Please show with an example. What's the difference between the above 2 types of looping statements? How do I use a READ statement on this?

    Hello Saurajit,
    Both ways are correct, and it depends on the requirement which one to use.
    <f_w> TYPE typ_tab - is a typed field symbol; if you know the type before runtime you can use this. It will work just like a static work area.
    <f_w> TYPE any - is not typed, and the structure of this field symbol is determined at runtime by the ASSIGN statement. Fields of the structure are then read with the ASSIGN COMPONENT <field name> OF STRUCTURE statement.

  • Looking for a better way to observe POJO changes

    Let's say you have a POJO like Employee, and you want to make changes to the fields in the POJO observable. What's the best way to do this? The listeners should be able to register for change events, without necessarily subscribing to every property.
    I could create an ObservableEmployeeHelper that takes an employee and wraps each of the fields in Simple*Property objects, and then fires notifications whenever any of those properties change. But it would be nice for the listeners to be able to pick and choose what they want to listen to -- something like this:

        addListener(EmployeeEvent.ALL, new ChangeListener<ObservableEmployeeHelper>(){...});

    or

        addListener(EmployeeEvent.SALARY_CHANGE, new ChangeListener<ObservableEmployeeHelper>(){...});

    The other problem is that the ListCell and TableCell really don't know how to deal with a change to the POJO instead of to a single observable attribute. For example, if Employee has a status field (ON_HOLIDAY, ON_SICK_LEAVE, ON_SITE, WORKING_FROM_HOME) and you want to display an icon next to the user's name indicating this status, it's difficult to do in a way that automatically updates the icon in a list whenever you change the status. My hack for this is to attach a listener to the Employee's observable status property and watch for changes. But this can lead quickly to memory leaks if not done well.
    And currently if you put the Observable POJOs into an ObservableList, you won't get notified if a particular POJO changes, only if the entire list changes.
    Is there an easier way to watch for changes to any field in a POJO?
    It would be nice if there was an @Observable annotation that you could just add to the fields of your POJO and some kind of mixin that you could add to the class that would propagate changes, register listeners, etc.
    Edited by: mfortner on Aug 6, 2012 5:00 PM

    I think you'll need to build some kind of custom solution for this.
    I've done this for my own program so I can monitor changes in any of the properties using just one listener. This helps me to do two things:
    1) Monitor when an object has changed (if any of its properties was changed), and register the object for future storage (unless modified again within a specific timeout)
    2) Monitor when an object was accessed (if a specific set of properties was read), and schedule the object for enrichment (fetching additional data in the background)
    Using properties for this is great because a lot of action can simply be done in the background and as soon as the information is available the UI will update. For example, when the Photo property is accessed the first time by binding it to an ImageView, the Photo won't have loaded yet (it's null). The access is noticed though, and so a background thread loads the Photo from the database. When this process is finished, it updates the Photo property -- thanks to the binding to ImageView, the Photo will be updated automatically in the UI.
    The advantage of course is that I do not need to fetch these things myself anymore -- the program does this automatically... and only for those objects that were actually accessed (the rest will remain in a minimal state until actually used).
    The same goes for the automated storage. I just bind a property to the UI, or several of them, and when any of them is changed, a process starts that waits 3 seconds before storing the object. If any more changes occurred in the meantime, the timer resets.
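    For the specific ObservableList point above, one built-in JavaFX option (a sketch under the assumption that the POJO exposes JavaFX properties; the Employee fields here are illustrative) is to create the list with an extractor, so the list itself fires an update change whenever one of the named properties of an element changes:

        import javafx.beans.Observable;
        import javafx.beans.property.SimpleObjectProperty;
        import javafx.beans.property.SimpleStringProperty;
        import javafx.collections.FXCollections;
        import javafx.collections.ListChangeListener;
        import javafx.collections.ObservableList;

        public class ObservableEmployeeDemo {

            enum Status { ON_HOLIDAY, ON_SICK_LEAVE, ON_SITE, WORKING_FROM_HOME }

            static class Employee {
                final SimpleStringProperty name = new SimpleStringProperty();
                final SimpleObjectProperty<Status> status = new SimpleObjectProperty<>();
            }

            public static void main(String[] args) {
                // The extractor tells the list which properties of each element to observe.
                ObservableList<Employee> employees = FXCollections.observableArrayList(
                        (Employee emp) -> new Observable[] { emp.name, emp.status });

                employees.addListener((ListChangeListener<Employee>) change -> {
                    while (change.next()) {
                        if (change.wasUpdated()) {
                            System.out.println("Employee updated at index " + change.getFrom());
                        }
                    }
                });

                Employee alice = new Employee();
                employees.add(alice);
                alice.status.set(Status.ON_HOLIDAY);   // the list fires an update change
            }
        }

    A ListView or TableView backed by such a list can then react to these update changes (which is what the status-icon case needs) without hand-managed listeners on each row object.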

  • Different Ways To Gather Table Statistics

    Can anyone tell me the difference(s), if any, between these two ways to refresh table stats:
    1. Execute this procedure which includes everything owned by the specified user
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'USERNAME', degree => dbms_stats.auto_degree);
    2. Execute this statement for each table owned by the specified user
    analyze table USERNAME.TABLENAME compute statistics;
    Generally speaking, is one way better than the other? Do they act differently and how?

    In Oracle's automatic stats collection, not all objects are included in stats collection.
    Only those tables which have stale stats are taken for stats collection. I don't remember off the top of my head, but it's either 10% or 20%, i.e. the tables where more than 10% (or 20%) of the data has changed are marked as stale. And only those stale objects will be considered for stats collection.
    Do you really think, each and every object/table has to be analyzed every day? How long does it take when you gather stats for all objects?

  • Do indices change, when data of table changes?

    Hello,
    if a table has many indices and is changed frequently, do the indices change as well? Are all indices corresponding to that table changed? Are these indices processed in any way?
    thanks, resi

    "in indexes, the space can be reused only by rows with the same key-filed"
    That is incorrect. The space left in a index by a deleted row can be used by any key that is between the prior and next keys. For example, if you have Index entries
    Entry 1  Ball
    Entry 2  Bell
    Entry 3  BollNow, you delete Bell, the inndex would look like:
    Entry 1  Ball
    Entry 2  
    Entry 3  BollNow, add Bill, and the index would look like:
    Entry 1  Ball
    Entry 2  Bill
    Entry 3  Bollno wasted space.
    Rebuilding indexes is a waste of time and resources in the vast majority of cases.
    Search Tom Kyte's site here for rebuilding indexes or reorganizing indexes for many, often lively, discussions.
    John

  • Proper way to make bulk changes to the Owner ID, Path and file share credentials for my existing subscriptions (ExtensionSettings)

    We are going through an upgrade/migration to SSRS 2012 and moving everything to a different domain. We have about 200 active subscriptions running, and the reports are being delivered to a file share. What is the correct way, in bulk, to change the OwnerId, the Path and the file share username/password credentials for these subscriptions? I see these values are stored in Subscriptions > ExtensionSettings. I see that the file share path and Owner wouldn't be a problem to change, but since the file share credentials are encrypted I would not be able to change them directly in ExtensionSettings. Does anyone know the proper way to change the Owner ID, Path and file share credentials for my existing subscriptions without having to change each one of them manually in Report Manager?
    Note: Reporting Services Native upgrade from SSRS 2005 to SSRS 2012.
    Thanks in advance.

    Hi Cygnus46,
    Based on my understanding, you want to change the Owner ID, Path and file share credentials for all existing subscriptions.
    In Reporting Services, the subscription information is stored in the Report Server database. In your scenario, you can go to the report server database and run a query to list all the subscriptions, then modify the owner and file share paths in the subscriptions table. For more information, please refer to this article: Tip: Change the Owner of SQL Reporting Services Subscription. If you want to change the file share credentials for subscriptions, you can run the query provided by wiperzeus in this similar thread:
    Windows File Share Delivery/ SSRS 2008 R2.
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Table changes in database are not captured in ODI model level

    Hi All,
    Can anyone help me with how to fix this bug in ODI?
    Table changes in the database are not captured at the ODI model level.
    Thanks in advance

    I created the interface, which is running successfully.
    Now I made some changes to the target table (database level).
    I reversed the updated table in the model section. Up to here it's OK.
    The table which is updated in the model section is not automatically updated in the interface.
    I have to drop the existing datastore in the interface and redo the entire process of bringing in the updated datastore, mapping, etc.
    Please tell me any alternate way.
    Regards
    suresh
