Best approach to populating a custom column through DAC

Our system is in production, and as part of a small enhancement we are adding a custom column in Informatica. The client is not allowing us to run a full load, and we run an incremental load every day, so what is the best approach to populate the column?
One option I can see is to go to the DAC Setup tab, set the refresh date to null for that particular table, and run the incremental load; DAC then runs a full load for that table while the rest of the load runs incrementally. Since it is a fact table that is fine, but it is not going to work for a dimension table, right?
Please share your valuable suggestions.

If you set the refresh dates to null, DAC will execute a full ETL for that table. So in the case of a dimension, the ROW_WIDs will change after the reload and the fact table joins will break, unless you reload the fact tables as well.
The workaround is to find the first record creation date in the source for that specific dimension table and enter that date in the refresh date column. This makes the incremental ETL extract every source row while preserving the existing ROW_WIDs.
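For example, a quick way to find that date in the source system (a minimal SQL sketch; src_customer_master and created_on_dt are hypothetical names, substitute whatever your dimension actually sources from):

-- Earliest record creation date in the source table feeding the dimension
-- (table and column names are placeholders for your environment)
SELECT MIN(created_on_dt) AS first_record_date
FROM src_customer_master;

Enter the returned date as the refresh date for the dimension table in DAC, and the next incremental run will pick up all source rows without regenerating the ROW_WIDs.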

Similar Messages

  • Best approach to add Z custom field to IC Agent Inbox search and results view

    Hi Experts,
    We have a requirement to add a Z custom field to the IC Agent Inbox search and results view. I have found multiple forum threads and ideas, but I am looking for the best approach to handle this. I am sure you experts have already done this.
    Thanks in advance.
    Regards
    Siva

    Hi Sivakumar,
    AET is the best way by far to create a custom field in this area. It is easy and simple.
    Also, once the field is added to one business object it can be reused in other objects as well.
    There is also a demo of AET available on SDN.
    Please let me know if any more help is required.
    Thanks,
    Bhushan

  • Best approach to write a custom TableCell<?, Boolean>

    Hi!
    I'm trying to write a custom TableCell that will display a checkbox in both render and edit mode. My class is shown below:
    package work.with.ui.controls;
    import javafx.event.EventHandler;
    import javafx.scene.control.CheckBox;
    import javafx.scene.control.TableCell;
    import javafx.scene.input.MouseEvent;
    // Person is a class that has a BooleanProperty in it called sex
    // (the checkbox is selected for Male and unchecked for Female).
    class CheckBoxCell extends TableCell<Person, Boolean> {
         private final CheckBox checkBox;
         public CheckBoxCell() {
              setText(null);
              checkBox = new CheckBox();
              setGraphic(checkBox);
              // Push the checkbox state straight into the model object.
              checkBox.setOnMouseClicked(new EventHandler<MouseEvent>() {
                   @Override
                   public void handle(MouseEvent event) {
                        Person person = (Person) getTableRow().getItem();
                        person.setSex(checkBox.isSelected());
                   }
              });
              setOnMouseClicked(new EventHandler<MouseEvent>() {
                   @Override
                   public void handle(MouseEvent event) {
                        // overridden to disable entering an unnecessary edit
                        // mode when the cell is double-clicked
                   }
              });
         }
         @Override
         protected void updateItem(Boolean item, boolean empty) {
              super.updateItem(item, empty);
              if (empty) {
                   setGraphic(null);
              } else {
                   checkBox.setSelected(item != null && item);
                   setGraphic(checkBox);
              }
         }
    }
    The code is working well, but my question is: is it a good approach to modify the data in the observable collection directly, given that startEdit, commitEdit, isEditing, etc. are available? Personally I find the example from the TableView tutorial on the JavaFX documentation site a bit verbose, and as such I didn't fully understand it. If you have a better suggestion regarding the writing of this TableCell, please tell me about it.
    Thank you!

    We used this approach and it worked very well, much like TextFieldTableCell, using Enter to commit and Esc to cancel editing. The difference is that we use a custom component as the editing graphic; this component returns a Number as its result, but it can be adapted to various situations:
    import br.com.valesys.fxutil.control.DecimalTextField;
    import br.com.valesys.fxutil.converters.NumberStringConverter;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javafx.beans.property.ObjectProperty;
    import javafx.beans.property.SimpleObjectProperty;
    import javafx.event.EventHandler;
    import javafx.scene.control.TableCell;
    import javafx.scene.input.KeyCode;
    import javafx.scene.input.KeyEvent;
    // @author Alex
    public class DecimalTextFieldTableCell<S extends Object, T extends Number> extends TableCell<S, T> {
        private DecimalTextField decimalTextField;
        private final ObjectProperty converter = new SimpleObjectProperty();
        public DecimalTextFieldTableCell(NumberStringConverter converter) {
            setConverter(converter);
        }
        public final ObjectProperty converterProperty() {
            return converter;
        }
        public final void setConverter(NumberStringConverter stringconverter) {
            converterProperty().set(stringconverter);
        }
        public final NumberStringConverter getConverter() {
            return (NumberStringConverter) converterProperty().get();
        }
        @Override
        public void startEdit() {
            if (!isEditable() || !getTableView().isEditable() || !getTableColumn().isEditable()) {
                return;
            }
            super.startEdit();
            // Create the editor lazily, the first time the cell is edited.
            if (decimalTextField == null) {
                decimalTextField = new DecimalTextField(getConverter().fromString(getItemText()));
                decimalTextField.setOnKeyReleased(new EventHandler<KeyEvent>() {
                    @Override
                    public void handle(KeyEvent t) {
                        if (t.getCode().equals(KeyCode.ENTER)) {
                            if (getConverter() == null) {
                                // Logged rather than propagated.
                                Logger.getLogger(DecimalTextFieldTableCell.class.getName()).log(Level.SEVERE,
                                        "Trying to convert a value with a null StringConverter.");
                                return;
                            }
                            try {
                                commitEdit((T) getConverter().fromString(decimalTextField.getText()));
                            } catch (RuntimeException ex) {
                                // Ignore values the converter cannot parse.
                            }
                        } else if (t.getCode().equals(KeyCode.ESCAPE)) {
                            cancelEdit();
                        }
                    }
                });
            }
            decimalTextField.setText(getItemText());
            setText(null);
            setGraphic(decimalTextField);
            decimalTextField.selectAll();
        }
        @Override
        public void cancelEdit() {
            super.cancelEdit();
            setText(getItemText());
            setGraphic(null);
        }
        @Override
        protected void updateItem(T t, boolean bln) {
            super.updateItem(t, bln);
            if (isEmpty()) {
                setText(null);
                setGraphic(null);
            } else if (isEditing()) {
                if (decimalTextField != null) {
                    decimalTextField.setText(getItemText());
                }
                setText(null);
                setGraphic(decimalTextField);
            } else {
                setText(getItemText());
                setGraphic(null);
            }
        }
        private String getItemText() {
            return getConverter() != null ? getConverter().toString(getItem())
                    : getItem() != null ? getItem().toString() : "";
        }
    }
    Sorry for the mistakes in English, I do not speak your language very well.

  • Best approach for uploading document using custom web part - Client OM or REST API

    Hi,
    I am using my custom upload Visual Web Part for uploading documents into my document library with a lot of metadata.
    The columns include single line of text, drop-down list, lookup, and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am using the traditional server-side object model (SSOM), and I would like to know the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ document libraries within those sub-sites. Currently it takes a few minutes to upload the files in my dev environment, and I am wondering what will happen when the number of sub-sites reaches a hundred.
    I am looking at this from a performance perspective.
    My thought process is:
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please share them.
    Also, are there any restrictions on the size of the uploaded file?
    any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document list
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile = documentsList.RootFolder.Files.Add(
                fileCreationInformation);
            // Update the metadata for a field having the name "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm referring to Olite R3 without any patches. I don't use the Java API, I use MDW.
    If I have a new table or a new column on an existing table, what's the best approach to publish it?
    I'm asking this because I've tried lots of approaches, and the only solution was, step by step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do it, it does not work)
    6) Tools/Package... (that's where it's a problem: if I don't remove the app and create it again it does not work!)
    7) on the client side, I perform a msync with "force refresh"
    That's the only way I found to reliably publish new items. No other action pushes the new table or new column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, but rather a mix of Java and the final publish step you use. That said:
    Adding new PIs should be easy: just add them and re-publish (no need to drop anything).
    For changes: if you just have new columns and the SQL statement is 'select * from', then you should only need to make the changes in the base schema objects and run the publish with no changes, and the updates should be picked up. If you are selecting specific columns, then update the statement and re-publish.
    When using MDW, at the end you can save the application as a jar file and then use this jar file to publish in the Mobile Manager - this is the best way to publish.
    Have a look at this jar file in WinZip and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • What is the best approach to handle multiple FKs with a single table?

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with FKs into an EUL and Discoverer Plus is used to create the above report, then on first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like
    select col1, col2, ...coln,
    pc.name, pc.address, ...pc.phone,
    pm.name, pm.address, ...pm.phone
    from main m,
    person pc,
    person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per table, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all joins (or deleting unwanted ones), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address... now you have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters on person names are needed (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes - it's not a good idea to register 50 functions).
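    For reference, the registered function described in this third solution might look like the following (a minimal PL/SQL sketch using the PERSON table from the example above; the function name is hypothetical):
    -- Lookup function returning a person's name for a given person_id.
    create or replace function get_person_name(p_person_id in number)
       return varchar2
    is
       l_name person.name%type;
    begin
       select name into l_name from person where person_id = p_person_id;
       return l_name;
    exception
       when no_data_found then
          return null;  -- no matching person; keeps the report from erroring
    end get_person_name;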
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work. In both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • Best approach for performing DMLs using stored procedures

    Hi,
    I have a really general question and would like to hear your say about this.
    I want my application to manipulate or read data using stored procedures (or packages in that manner) and not directly using queries against the DB.
    Let's say I have a table with many columns:
    create table test (pkid number(10), col1 varchar2(30), col2 number(10), col3 date, ...);
    For such a DML procedure, is it best to do something like
    procedure do_update(i_pkid IN number,i_col1 IN varchar2, i_col2 IN number, i_col3 IN date,...) as
    begin
       update test
       set col1=i_col1,
            col2=i_col2,
            col3=i_col3...
       where pkid=i_pkid;
       commit;
    end;
    Or do a selective update, meaning update only a certain column each time, given that only one column actually changes? (And how would I do that - separate procedures for each column? Columns can be null.)
    Also, is it better to work with test.col1%type instead of specifying the data type in the procedure?
    And one last question - If I have a table with 100 columns and I don't want to create a procedure with 100 parameters - the best approach would be to use a record?
    I just need to be set on the way I start implementing things in order to do it well from the start.
    Many thanks.
    Edited by: Pyrocks on Nov 17, 2010 1:58 PM
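    (For reference, a common pattern for the selective update asked about above uses defaulted parameters with NVL - a sketch only; note it cannot distinguish 'leave unchanged' from 'set to NULL', which matters here since the columns can be null:)
    procedure do_update(i_pkid in number,
                        i_col1 in varchar2 default null,
                        i_col2 in number default null,
                        i_col3 in date default null) as
    begin
       -- NVL keeps the current value when no new value is passed
       update test
       set col1 = nvl(i_col1, col1),
           col2 = nvl(i_col2, col2),
           col3 = nvl(i_col3, col3)
       where pkid = i_pkid;
    end;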

    Pyrocks wrote:
    One last clarification (although it may be more related to c/c++ developers - maybe one of you will know):
    We are working with C++ and VB against a SQLServer and my part is to translate all the existing procedures to Oracle in order to migrate the application to work with Oracle DB.
    The existing procedures use an IN parameter for each column in the table and I would really like to use rowtype like you mentioned.
    Since I'm not a c/c++/vb developer - is there an easy way of working with such types, or even User-Defined Types, from c/c++/vb (as in passing a rowtype record from c++ to a SP ?)
    I'm not looking for the actual way - just want to know how hard it would be and how much code needs to be changed in order to be able to convince the developers that this is the RIGHT way to work.
    PS: none of our developers have experience with Oracle so they wouldn't know the answer...
    Not actually an Oracle server-side (SQL language or PL/SQL language) question - but a client-side one. And it has been a long time since I wrote a fat client using C/C++ or Delphi.
    The OCI (Oracle Call Interface) supports advanced (user-defined) SQL data types, and has since Oracle 8i. So in that respect, yes, the client can support custom SQL data types.
    How well it does depends entirely on that client language's features wrt OCI integration. For example, Delphi 4 was released around Oracle 8i and supported custom SQL types. I would expect that most languages today (like Java and C#) provide support for it.
    As for using %ROWTYPE - this is a PL/SQL clause as far as I know, and I am unsure whether it is supported by the OCI. What could support it is a pre-compiler like Pro*C. These enable you to mix pseudo SQL source code with client language source code. The pre-compilation step then replaces the pseudo SQL code with native client language calls to the OCI. The code is then compiled by that client language's compiler. Pre-compilers can pull all kinds of interesting "tricks" with their pseudo SQL code support.
    The best would be to consult the applicable client language's manuals that describe the interface it supports (via OCI) to Oracle.
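    As a rough illustration of the record-based approach raised in the question (a sketch only, using the test table defined above; transaction control is left to the caller):
    -- One %ROWTYPE record parameter instead of one parameter per column.
    create or replace procedure do_update(i_row in test%rowtype) as
    begin
       update test
       set col1 = i_row.col1,
           col2 = i_row.col2,
           col3 = i_row.col3
       where pkid = i_row.pkid;
    end do_update;
    Whether the client can bind such a record directly is exactly the OCI question discussed above; a common fallback is to populate the record in a small anonymous PL/SQL block called from the client.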

  • Do you know the best approach with data...?

    I am considering the best approach for returning a result set from an EJB to my JSP page, but I don't know which approach is best. Please comment. (A ResultSet cannot be serialized, so returning it directly is not an option.)
    Approach A - Make a custom class with get/set methods representing each column value in the result set, and use that class in the JSP. However, I find this tedious because whenever I add to the select statement, I have to add class variables too.
    Approach B - Manually manipulate the data in the result set, put it into a Vector, then return the Vector to the JSP.
    Approach C - Use RowSets instead and return the RowSet to the JSP.
    Many thanks to you all...

    Hello,
    Approach A is not recommended - you would have to leave the result set open and so leave the connection to the database open.
    Approach B is better
    Approach C - well RowSets are a new thing in 1.4 which I have not tried yet. They look useful, but is your app running on 1.4?

  • LOV of column names with a report's custom column headings?

    I have a list of values definition that looks like this:
    select column_name d, column_name r from all_tab_columns where table_name = 'DATABASE_LIST'
    I'd like to list the custom column headings from a report as d, rather than repeating the column_name. How can I do this?

    As Anton said, the best thing is to store your custom headings in a table so that you can use the table for your LOV as well as for your report headings.
    To use dynamic report headings, you can use the 'PL/SQL function body returning colon-delimited headings' feature on the Report Attributes page.
    So, if your report headings are stored in table t that function body can be
    declare
    l_headings varchar2(4000);
    begin
    for rec in (select heading from t) loop
       l_headings := l_headings||':'||rec.heading;
    end loop;
    return ltrim(l_headings,':');
    end;
    Hope this helps.
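    The matching LOV could then read from the same table (a sketch, assuming t stores one row per column with its column_name and custom heading):
    select heading d, column_name r from t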

  • R/3 4.7 to ECC 6.0 Upgrade - Best Approach?

    Hi,
    We have to upgrade R/3 4.7 to ECC 6.0
    We have to do the DB, Unicode and R/3 upgrades. I want to know what the best approaches available are and what risks are associated with each approach.
    We have been considering the following approaches (but need to understand the risk for each approach).
    1) DB and Unicode in 1st and then R/3 upgrade after 2-3 months
    I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
    2) DB in 1st step and then Unicode and R/3 together after 2-3 months
    Does it make sense to combine Unicode and R/3 as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using 2 systems for import and export during the Unicode conversion.
    3) DB and R/3 in 1st step and then Unicode much later
    We had a discussion with SAP and they say there is a disclaimer about not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please let us know if this is correct, and also whether doing Unicode later will have any key challenges apart from certain language characters not being available.
    We are on single code page 1100 and the database size is about 2 terabytes.
    Thanks in advance
    Regards
    Rahul

    Hi Rahul
    regarding your 'Unicode doubt', some ideas:
    1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among other things, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
    2) In Unicode@SAP you can find several substantial FAQs.
    Conclusion from the FAQs: first of all, your strategy needs to follow your business model (which we cannot see from here):
    Example. The "Upgrade to mySAP ERP 2005"-FAQ includes interesting remarks in section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
    "...The Unicode conversion depends on the customer situation....
    ... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
    - If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
    In the Technical Unicode FAQ you read under "What are the advantages of Unicode ...", that "Proper usage of JAVA is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal). ....
    => Depending on the fact if your systems support global processes, or depending on your use of Java Applications, your strategy might need to look different
    3) In particular in view of your 3rd option, I recommend you to take a look into these FAQs, if not already done.
    Remark: mySAP ERP 2005 is the former name of the application, which is now named SAP ERP 6.0.
    regards, and HTH, Andreas R

  • Is there an easy view of assigned Retention Tags in main view of Messages, e.g. a custom column?

    Hi,
    I have a requirement to have an easy/overall view of assigned Retention Policy tags against a list of messages in Outlook 2010.
    I envisage that if there was a column in the main message list view, that showed the assigned tag against each message and that could be sorted, e.g. so I can see what has an assigned tag (and what it is) and what doesn't, this would hit the mark perfectly.
    Any such capability?
    Can I create a custom column that extracts a message property to do this?
    If so, how?
    Thanks! David.

    Hi Greg,
    Sorry it's been a couple of days, it's been busy.
    Unfortunately not. I raised it with our devs to take a look at. I was initially after an assessment of effort, but they were quoting me a number of days of effort just to get to the point of being able to tell me how long it would take. At that point I pulled it, as it wasn't a must-have requirement.
    Still, it would have been nice.
    To extract such a property, I might imagine an Outlook plug-in that extracts the data from the property, with watchers to look out for changes in various circumstances. If you can get the data into a new custom message property this way, a column will readily display it. It feels a little over-engineered, though, duplicating the data just to get it into a place where it can be shown.
    Maybe there are easier ways. But no, I didn't make progress.
    All the best. Ta, /David.

  • How to set a custom column in a workflow task.

    Hello,
    I'm looking for some assistance with how to set a custom column in a workflow task.
    I have a List Workflow that starts when an item is created in a list. The workflow (platform type SharePoint 2013) starts a new task, Task1, with Content Type 1. This Content Type has a custom column called Age. Once Task1 is completed, a new task, Task2, with Content Type 2, starts; it has the same Age column as Task1.
    How can I populate the Age column in Task2 with the content of the Age column in Task1?
    Since I start the task by running the "Assign a task to ..." action, I was thinking to copy the Age column from Task1 to the list item that started Task1 (which has an Age column as well), and then in Task2 to start another workflow - associated with Content Type 2 - that would try to read the Age column from the list item that started Task2, which was set once Task1 was completed. I know it's complex, but this is how I was thinking.
    The problem with this approach is that I can't get a reference to the list item that started Task2 to read the Age from the list item.
    Is there a better approach? I use SharePoint Designer 2013 to design all this.
    Any assistance is appreciated.
    Thank you.

    Hello Sebastian,
    you can get the Age column from Task 1 and then update the Task 2 Age column with that value. I am not sure why you want to run another workflow on Task 2.
    You can perform below steps to set Age column from Task 1 to Task 2.
    1. Create Task 1 using Assign a task; wait till the task is completed.
    2. Get the Age column value based on Task 1 once the task is completed.
    3. Create Task 2 using Assign a task; uncheck the 'wait till the task is completed' option.
    4. Update Task 2 with the Age column value from Task 1.
    5. Use 'Wait for field to equal value' to check whether the Task Status is completed.
    >>The problem with this approach is that I can't get a reference to the list item that started Task2 to read the Age from the list item.
    You can get the related item from the task list item to reach the main list item.
    The other option is to use JavaScript and CSOM in the task edit form to get the Age column from Task1 and prepopulate the Age value when Task2 is opened.
    Hope this helps.
    My Blog- http://www.sharepoint-journey.com|
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful

  • How can I load data from an Access database into a DataGridView with custom columns for all days of a month?

    Hi guys
    I am a newbie in VB.NET and I would like your help to solve a problem.
    I have this DataGridView with two fixed columns and all the days of a month as custom columns.
    (Screenshot: http://i59.tinypic.com/2qwpj15.png)
    I also have one combobox to change Year and a combobox to change Month.
    Here is the code to load data
    Private Sub fill_plan()
        dgMonth.Rows.Clear()
        Try
            Dim i As Integer = 0
            Dim query As String = "SELECT MonID,Unitname,Personel,Udate FROM tblMonth ORDER BY Unitname"
            con.Open()
            cmd = New OleDbCommand(query, con)
            myDR = cmd.ExecuteReader
            If myDR.HasRows Then
                While myDR.Read
                    dgMonth.Rows.Add()
                    dgMonth.Rows(i).Cells(0).Value = myDR.GetInt32(myDR.GetOrdinal("MonID"))
                    dgMonth.Rows(i).Cells(1).Value = myDR.GetString(myDR.GetOrdinal("Unitname"))
                    dgMonth.Rows(i).Cells(2).Value = myDR.GetInt32(myDR.GetOrdinal("Personel"))
                    i = i + 1
                End While
            End If
            myDR.Close() : con.Close()
        Catch ex As Exception
            MsgBox(ex.Message, MsgBoxStyle.Critical, "Error")
        End Try
    End Sub
    With this code the Personel value always loads into the first day-of-month column. I want to load it into the column for the date (Udate) that is stored in the database.

    Hello,
    This can be done with less code
    Private Sub fill_plan()
        dgMonth.DataSource = Nothing
        Dim dt As New DataTable
        Try
            Dim query As String = "SELECT MonID,Unitname,Personel,Udate FROM tblMonth ORDER BY Unitname"
            con.Open()
            cmd = New OleDbCommand(query, con)
            dt.Load(cmd.ExecuteReader)
            dgMonth.DataSource = dt
        Catch ex As Exception
            MsgBox(ex.Message, MsgBoxStyle.Critical, "Error")
        End Try
    End Sub
    The above loads all rows. If you want to limit the rows placed in the DataGridView, this is best done in the SQL via WHERE conditions and/or with SELECT TOP x (see the sketch below). Formatting of the data is best done via the property window for the DataGridView on whatever column you want. Using the above you now need to set the data property for each column and set dgMonth.AutoGenerateColumns = False; in the end we end up with less code.
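    For example, in Access SQL the limit might look like this (a sketch; the date range is a made-up example using the Udate column from the query above):
    SELECT TOP 50 MonID, Unitname, Personel, Udate
    FROM tblMonth
    WHERE Udate BETWEEN #06/01/2014# AND #06/30/2014#
    ORDER BY Unitname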
    Edit: is there a reason for returning the primary key? If so, using my method we can hide that field, but I see no reason for having it in this case.
    Please remember to mark the replies as answers if they help and unmark them if they provide no help, this will help others who are looking for solutions to the same or similar problem.

  • Office 365 - SharePoint: In the Advanced Search page, how to add a custom column under property restrictions

    On the Advanced Search page, how do I add a custom column under property restrictions?

    Hi,
    The Navigation control can be added into your HTML page in the Snippet Gallery:
    The two links below describe how to create an HTML master page and add the needed snippets to it, for your reference:
    http://borderingdotnet.blogspot.jp/2012/12/how-to-create-html-masterpage-for.html
    http://msdn.microsoft.com/en-us/library/office/jj822370(v=office.15).aspx
    Feel free to reply if there are still any questions.
    Best regards,
    Patrick
    Patrick Liang
    TechNet Community Support

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any builtin features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How did other forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow?
    Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
    Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the option Repeat.
    On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2?
    You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of error all table updates roll back; but there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping step - you must carefully analyze the results of the error).
    What will happen if m3 fails?
    The process is suspended and you can restart execution from m3.
    By configuring no failover and max number of errors = 0, you reduce the failed rows to recycle to zero (0). These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data?
    In my opinion, for large volumes set-based mode is the preferred processing mode. With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg
