Streams on materialized view table vs. local table
We are in a situation where we temporarily need to implement Streams on several materialized view tables. During development and testing I've noted that a local table with Streams implemented on it yields 50% faster apply performance than the materialized view tables. Can anyone tell me (1) why this is, since it doesn't seem to make sense given that data is retrieved from a buffered queue, not the tables, and (2) a workaround to improve apply performance on the MV tables? Any help would be appreciated.
Can't give you an answer why. I would suggest that you try (1) creating the materialized views on prebuilt tables and (2) adding parallelism to the apply process(es).
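A sketch of both suggestions, with hypothetical object and apply-process names (check DBA_APPLY for the real apply name before changing anything):

```sql
-- (1) MV on a prebuilt table: the table ORDERS_MV must already exist
--     with matching columns; the MV definition just takes it over.
CREATE MATERIALIZED VIEW orders_mv
  ON PREBUILT TABLE
  REFRESH FORCE ON DEMAND
  AS SELECT * FROM orders@source_db;

-- (2) Raise apply-process parallelism (apply name is hypothetical).
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'APPLY_MV_SITE',
    parameter  => 'parallelism',
    value      => '4');
END;
/
```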
Similar Messages
-
Creation of materialized view from remote linked table
Hi ,
I am facing a problem creating a materialized view that is based on a remote database link; the query involves one equi-join, and each of the two tables contributes around 2.75 crore (27.5 million) rows. I am trying to create two different MVs, but the views are taking a very long time to create. I also cannot compromise on performance. If you have any ideas or suggestions, please post them.
Thanks,

user13104802 wrote:
Hi ,
I am facing problem in creating a materialized view which is based on remote link and my query is involving one equi-join. And both table contributes around 2.75 crore rows. I am trying to create two diff views(MV) but the views are taking very much time to create. If you have any ideas or suggestions. And also I want performance I cant compromise it, so help. Please post it down.
Thanks,

Welcome to the forum.
You will need to provide more information if you are interested in getting an intelligent response to your post. It appears that you are creating 2 different MVs, but the details of each are not provided, e.g.:
Where does each of the source tables exist, i.e. local or remote?
How many rows are in each?
How will the MVs be refreshed?
There are other considerations as well: competition for resources, processing power, network bandwidth, etc.
If all of the source tables exist on the remote database, then consider creating the MV there and creating a local view across the db link; or possibly create an MV on the remote server for a subset of the remote data and link to that MV locally. -
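A minimal sketch of that last suggestion, with hypothetical names: the MV lives on the remote database, close to the data, and the local side only sees a lightweight view over the db link.

```sql
-- On the REMOTE database: pre-join/aggregate once, near the data.
CREATE MATERIALIZED VIEW mv_joined
  BUILD IMMEDIATE
  REFRESH FORCE ON DEMAND AS
  SELECT a.id, a.col1, b.col2
  FROM   big_a a
  JOIN   big_b b ON a.id = b.id;

-- On the LOCAL database: a plain view across the db link.
CREATE VIEW v_joined AS
  SELECT * FROM mv_joined@remote_db;
```
-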
Create materialized view in SE referring table in EE
Hi DBAs,
I am trying to create a Materialized view on Oracle 10g Standard Edition which is installed on Linux Debian Lenny.
The master table is on Oracle 10g Enterprise Edition installed on Red Hat Linux 4.1.2.
When I run,
create materialized view t1 build immediate refresh fast as (select * from aksharaemsmigrated.t1@db102)
i get,
ORA-12028: materialized view type is not supported by master site
When I run,
create materialized view t1 build immediate refresh fast on commit as (select * from aksharaemsmigrated.t1@db102);
I get,
ORA-01031: insufficient privileges
When I run,
select * from aksharaemsmigrated.t1@db102
I get,
the rows from aksharaemsmigrated.t1@db102
NOTE: 'db102' is remote DB on 10g EE for which I created db link.
I am able to create Mview for local table.
Is this not supported? I mean creating Mview on Oracle SE referring base table from Oracle EE.
I have to give solution soon. Can you please throw some light.
Regards,
Vijay

Don't put the SELECT statement inside parentheses.
Test these
create table x1 as select * from aksharaemsmigrated.t1@db102;
create materialized view t1 build immediate refresh fast as select * from aksharaemsmigrated.t1@db102;

Note that you cannot create an MV with REFRESH ON COMMIT across databases.
See my explanation at http://hemantoracledba.blogspot.com/2008/06/mvs-with-refresh-on-commit-cannot-be.html
Hemant K Chitale -
ORA-12096: error in materialized view log on a table
Hi, while updating a table I get the error below:
ORA-12096: error in materialized view log on "FII"."FII_GL_JE_SUMMARY_B"
What might be the problem.
Thanks.

The table definition might have changed; you may need to drop the mview log and recreate it.
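A sketch of that fix, using the table named in the error message. The WITH clause here is a guess; check the existing log's definition first, since it must match what the dependent MVs need for fast refresh:

```sql
DROP MATERIALIZED VIEW LOG ON fii.fii_gl_je_summary_b;

CREATE MATERIALIZED VIEW LOG ON fii.fii_gl_je_summary_b
  WITH PRIMARY KEY;
```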
-
Query on Materialized View created on Prebuilt Table
Hi,
I am trying to explain the scenario below on Materialized View which I want to setup for our database.
I have created a materialized view on a prebuilt table on the target side, keyed on the primary key. Now, to purge history data older than 5 years from the target side, I executed the following steps:
1. Dropped the materialized view which was created previously on prebuilt table in target side.
2. Deleted the data from that target table which are more than 5 years old.
3. Now created the materialized view again on the prebuilt table.
Now, the problem I am facing is that if any changes happen on the source side during the above 3 steps, those changed records are not captured even after successfully rebuilding the materialized view on the target side (i.e. the records changed in the interim, before step 3 completed, are missing).
Can you please let me know exactly what I am doing wrong here and how can I achieve my intended result.
Regards,
Koushik

See Metalink Doc ID 252246.1. The document says: A materialized view was defined on a Table A. This table had a referential constraint defined against another table, Table B. This constraint was defined as ON DELETE CASCADE. An ON DELETE CASCADE constraint is not allowed on views, but as the constraint was created on the table underlying the materialized view, Table A, it could be created, although it would behave as a constraint on the view. The constraint existed for performance reasons, which is permitted, and was disabled when created, but a general script had been run to enable all constraints. When a delete was performed on Table B, the error above was reported although there was no view created on Table B.
-
Questions on Materialized Views and MV Log tables
Hello all,
Have a few questions with regards to Materialized View.
1) Once the materialized view reads the records from the MLOG table, the MLOG's records get purged, correct? Or is that not the case? In some cases I still see (old) records in the MLOG table even after the MV refresh.
2) How does the MLOG table distinguish between a read that comes from an MV and a read that comes from a user? If I manually execute
"select * from <MLOG table>" would the MLOG table's record get purged just the same way as it does after an MV refresh?
3) One of our MV refreshes hangs intermittently. Based on the wait events I noticed that it was doing a "db file sequential read" against the master table. Finally I had to terminate the refresh. I'm not sure why it was doing sequential read on the master table when it should be reading from the MLOG table. Any ideas?
4) I've seen "db file scattered read" (full table scan) in general against tables but I was surprised to see "db file sequential read" against the table. I thought sequential read normally happens against indexes. Has anyone noticed this behaviour?
Thanks for your time.

1) Once all registered materialized views have read a particular row from a materialized view log, it is removed, yes. If there are multiple materialized views that depend on the same log, they would all need to refresh before it would be safe to remove the MV log entry. If one of the materialized views does a non-incremental refresh, there may be cases where the log doesn't get purged automatically.
2) No, your query wouldn't cause anything to be purged (though you wouldn't see anything interesting unless you happen to implement lots of code to parse the change vectors stored in the log). I don't know that the exact mechanism that Oracle uses has been published, though you could go through and trace a session to get an idea of the moving pieces. From a practical standpoint, you just need to know that when you create a fast-refreshable materialized view, it's going to register itself as being interested in particular MV logs.
3) It would depend on what is stored in the MV log. The refresh process may need to grab particular columns from the table if your log is only storing the fact that data for a particular key changed. When you create a materialized view log, you can specify that you want to store particular columns or to include new values (with the INCLUDING NEW VALUES clause). That may be beneficial (or necessary) to the fast refresh process, but it will tend to increase the storage space for the materialized view log and the cost of maintaining it.
4) Sequential reads against a table are perfectly normal-- it just implies that someone is looking at a particular block in the table (i.e. looking up a row in the table by ROWID based on the ROWID in an index or in a materialized view log).
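As a sketch of point 3 above (table and column names hypothetical), a log that records the changed columns themselves, so a fast refresh of an aggregate MV does not have to revisit the master table:

```sql
-- Typical log for an aggregate MV: store the columns the MV sums,
-- plus both old and new values of each change.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH SEQUENCE, ROWID (prod_id, amount)
  INCLUDING NEW VALUES;
```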
Justin -
Materialized view or a big table
Hi all,
In our project, based on Documentum, we expect to store 140 million invoices from third-party consultant companies in our Oracle 10g database. We received two different approaches from the data modeling team.
The first is, in order to speed up queries, to "denormalize" the data and store it in one table, which would hold 140 million records with a total row length of 403 bytes; the second is to split it into two tables, Company and Invoice.
In that case the Company table would contain 2,000,000 records x 116 bytes and the Invoice table 140 million records x 171 bytes. They also suggested creating a materialized view based on those two tables to take advantage of the "fast refresh" feature.
I don't have any experience with materialized views, and my question is: which of those approaches is the best?
Any suggestion will be much appreciated.
Thanks in advance,
Alex

I'd say that it mostly depends on other requirements, at least including:
1) how many changes (inserts/updates) do you expect? Especially in spike hours.
2) how fast would you like to see changes in the mat view if you choose this approach? ON COMMIT? On a regular interval? If ON COMMIT, and you predict performance problems in spike hours (see above), then you'll have even more problems, because ON COMMIT is rather expensive in terms of total statements executed (see the tables under "Bright idea - materialized views" in http://www.gplivna.eu/papers/mat_views_search.htm)
3) how big is the possibility, and will you have a requirement at all, to update existing companies' data in already-registered invoices?
4) what kind of queries will you have, and would at least some of them benefit from the 2-table approach, i.e. could they be satisfied by the Companies table alone?
Gints Plivna
http://www.gplivna.eu -
Materialized view on a Partitioned Table (Data through Exchange Partition)
Hello,
We have a scenario to create an MV on a partitioned table which gets its data through an exchange partition strategy. Obviously, with exchange partition, snapshot logs are not updated and FAST refreshes are not possible. Also, since this partitioned table has 450 million rows, COMPLETE refresh is not an option for us.
I would like to know the alternatives for this approach,
Any suggestions would be appreciated,
thank you

From your post it seems that you are trying to create a fast refresh MV (as you are creating an MV log). There are limitations on fast refresh, which are documented in the Oracle documentation.
http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
If you are not planning to do a fast refresh then as already mentioned by Solomon it is a valid approach used in multiple scenarios.
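One alternative worth testing for the exchange-partition case is Partition Change Tracking (PCT): when the MV's defining query carries the detail table's partition key (or a partition marker), Oracle can refresh just the exchanged partitions without needing an MV log. A hedged sketch with hypothetical names; verify the capability with DBMS_MVIEW.EXPLAIN_MVIEW before relying on it:

```sql
-- SALES is assumed range-partitioned on SALE_DATE; keeping the
-- partition key in the select list is what enables PCT refresh.
CREATE MATERIALIZED VIEW mv_sales_pct
  REFRESH FAST ON DEMAND AS
  SELECT sale_date,
         prod_id,
         SUM(amount)   AS total_amt,
         COUNT(*)      AS cnt,
         COUNT(amount) AS cnt_amt  -- needed for fast refresh of SUM
  FROM   sales
  GROUP  BY sale_date, prod_id;
```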
Thanks,
Jayadeep -
Problems with materialized view and fk between two db
Hi,
i have two databases, db1 and db2; db1 has a table (DM_MESSDATEN) which contains a foreign key to a table (DM_FAUNA) in db2.
Now I want to write down the steps I have done, to get more clarification, and hopefully someone can point out my wrong steps.
1st
i create the tables inside db1 without a foreign key to the table in db2.
2nd
i create a database link inside db1 to db2
create public database link DATENBANK2 connect to phantomas identified by bachelor06 using 'DMDB2';
3rd
now and here i stuck want to create a materialized view inside db1
create materialized view DATAMART_MVW AS
select * from DM_MESSDATEN, DM_FAUNA@DATENBANK2
where DM_MESSDATEN.FAUNA_ID=DM_FAUNA.FAUNA_ID;
or should the view be created inside of db2?
4th
and then i want to reactivate the foreign key inside the table of db1- but can't because of
problems in step 3 :(
So it would be nice if someone could help me
thanks a lot
thomas

I think you haven't been clear in your statement of your problem:
"now and here i stuck want to create a materialized view inside db1"
Why are you stuck? If you want to enforce a foreign key locally using data from a remote database, then you need to build a materialized view on your local side that pulls data across from the remote database. You can then create a foreign key on your local table using the local MV.
If the remote table is updated frequently and you want the local MV kept in sync, then you will need to put some further replication in place. For instance, you may need to create a materialized view log on the remote database.
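Putting that answer together as a sketch. Object names are taken from the thread; the assumption is that FAUNA_ID is the primary key of DM_FAUNA, so the local MV inherits a usable key for the foreign key to reference:

```sql
-- On db2 (remote): a log so the local copy can fast-refresh.
CREATE MATERIALIZED VIEW LOG ON dm_fauna WITH PRIMARY KEY;

-- On db1 (local): a fast-refreshable local copy of the remote table.
CREATE MATERIALIZED VIEW dm_fauna_mv
  REFRESH FAST ON DEMAND AS
  SELECT * FROM dm_fauna@datenbank2;

-- Still on db1: the foreign key points at the local MV's table.
ALTER TABLE dm_messdaten
  ADD CONSTRAINT fk_messdaten_fauna
  FOREIGN KEY (fauna_id) REFERENCES dm_fauna_mv (fauna_id);
```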
Cheers, APC -
Materialized View UNION different tables 10g.
I am trying to create a materialized view from 2 different tables. According to the documentation for 10g, it should be possible.
Here is my script.
DROP MATERIALIZED VIEW PERSON_MV_T16;
CREATE MATERIALIZED VIEW PERSON_MV_T16 refresh complete on demand
AS
SELECT
CAST(P.MARKER AS VARCHAR2(4)) AS MARKER,
P.ROWID P_ROW_ID,
CAST(P.ACTIVE_IND_DT AS DATE) AS ACTIVE_IND_DT
FROM PERSON_ORGS_APEX_MV P
UNION
SELECT
CAST(P.MARKER AS VARCHAR2(4)) AS MARKER,
P.ROWID P_ROW_ID,
CAST(P.ACTIVE_IND_DT AS DATE) AS ACTIVE_IND_DT
FROM PERSON_ORGS_APVX_MV P;
delete from mv_capabilities_table;
begin
dbms_mview.explain_mview('PEOPLE.PERSON_MV_T16');
end;
select *
from mv_capabilities_table where capability_name not like '%PCT%' and capability_name = 'REFRESH_FAST_AFTER_INSERT';
I get the following error.
CAPABILITY_NAME = REFRESH_FAST_AFTER_INSERT
POSSIBLE = N
MSGTEXT = tables must be identical across the UNION operator
I wrapped them in CAST operations just to be sure they are the same type and size.

As far as I'm aware, you can create an MV in Standard Edition, and there is also no limitation that I'm aware of:
Standard and Enterprise Edition
A. Basic replication (MV replication)
- transaction based
- row-level
- asynchronous from master table to MV (Materialized View)
- DML replication only
- database 7 / 8.0 / 8i / 9i / 10g
Variants:
1. Read-only MV replication
2. Updateable MV replication:
2.1 asynchronous from MV to master
2.2 synchronous from MV to master
3. Writeable MV replication
Enterprise Edition only
B. Multimaster replication
- transaction based
- row-level or procedural
- asynchronous or synchronous
- DML and DDL replication
- database 7 / 8.0 / 8i / 9i / 10g
- Enterprise Edition only
Variants:
1. row-level asynchronous replication
2. row-level synchronous replication
3. procedural asynchronous replication
4. procedural synchronous replication
C. Streams replication
(Standard Edition 10g can execute Apply process)
- (redo) log based
- row-level
- asynchronous
- DML and DDL replication
- database 9i / 10g (10g has Down Streams Capture) -
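Coming back to the original error ("tables must be identical across the UNION operator"): fast refresh is only supported across UNION ALL, not UNION, and each branch needs a constant marker column plus a ROWID-based MV log on each base object, which must be a real table rather than another MV. A hedged sketch along those lines, assuming the underlying tables are PERSON_ORGS_APEX and PERSON_ORGS_APVX:

```sql
-- Each base table needs a ROWID materialized view log:
CREATE MATERIALIZED VIEW LOG ON person_orgs_apex WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON person_orgs_apvx WITH ROWID;

CREATE MATERIALIZED VIEW person_mv
  REFRESH FAST ON DEMAND AS
  SELECT 1 AS umarker,        -- constant UNION ALL marker per branch
         p.ROWID AS p_row_id,
         p.marker,
         p.active_ind_dt
  FROM   person_orgs_apex p
  UNION ALL
  SELECT 2 AS umarker,
         p.ROWID AS p_row_id,
         p.marker,
         p.active_ind_dt
  FROM   person_orgs_apvx p;
```
-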
Leave a distinct value in a materialized view on two tables
Hi and thank you for reading,
I have the following problem. I am creating a materialized view out of two tables, with "where a.id = b.id".
The resulting materialized view lists several values twice. For example, one customer name has several contact details, and thus the customer name is listed several times. Now I would like to join each customer name with just ONE contact detail; how can I do that? (Even if I lose some information by doing this.)
Thanks
Evgeny

Hi,
You can do this
SELECT deptno, empno, ename, job, mgr, hiredate, sal, comm
FROM emp_test
ORDER BY deptno;
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM
10 7782 CLARK MANAGER 7839 1981-06-09 2450
10 7839 KING PRESIDENT 1981-11-17 5000 0
10 7934 MILLER CLERK 7782 1982-01-23 1300
20 7566 JONES MANAGER 7839 1981-04-02 2975
20 7902 FORD ANALYST 7566 1981-12-03 3000
20 7876 ADAMS CLERK 7788 1987-05-23 1100
20 7369 SMITH CLERK 7902 1980-12-17 800
20 7788 SCOTT ANALYST 7566 1987-04-19 3000
30 7521 WARD SALESMAN 7698 1981-02-22 1250 500
30 7844 TURNER SALESMAN 7698 1981-09-08 1500
30 7499 ALLEN SALESMAN 7698 1981-02-20 1600 300
30 7900 JAMES CLERK 7698 1981-12-03 950
30 7698 BLAKE MANAGER 7839 1981-05-01 2850
30 7654 MARTIN SALESMAN 7698 1981-09-28 1250 1400
14 rows selected.
SELECT CASE
WHEN ROW_NUMBER () OVER (PARTITION BY deptno ORDER BY empno) =
1
THEN deptno
END deptno,
empno, ename, job, mgr, hiredate, sal, comm
FROM emp_test;
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM
10 7782 CLARK MANAGER 7839 1981-06-09 2450
7839 KING PRESIDENT 1981-11-17 5000 0
7934 MILLER CLERK 7782 1982-01-23 1300
20 7369 SMITH CLERK 7902 1980-12-17 800
7566 JONES MANAGER 7839 1981-04-02 2975
7788 SCOTT ANALYST 7566 1987-04-19 3000
7876 ADAMS CLERK 7788 1987-05-23 1100
7902 FORD ANALYST 7566 1981-12-03 3000
30 7499 ALLEN SALESMAN 7698 1981-02-20 1600 300
7521 WARD SALESMAN 7698 1981-02-22 1250 500
7654 MARTIN SALESMAN 7698 1981-09-28 1250 1400
7698 BLAKE MANAGER 7839 1981-05-01 2850
7844 TURNER SALESMAN 7698 1981-09-08 1500
7900 JAMES CLERK 7698 1981-12-03 950
14 rows selected.

Edited by: Salim Chelabi on 2009-09-14 08:13 -
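Applied to the original question (customer/contact table and column names are hypothetical), the same ROW_NUMBER technique can keep exactly one contact row per customer, rather than just blanking the repeated value:

```sql
SELECT customer_name, contact_detail
FROM  (SELECT c.customer_name,
              d.contact_detail,
              ROW_NUMBER() OVER (PARTITION BY c.id
                                 ORDER BY d.contact_id) AS rn
       FROM   customers c
       JOIN   contacts  d ON d.id = c.id)
WHERE  rn = 1;
```
-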
Dear All,
I have created a materialized view which refreshes on commit. The materialized view has query rewrite enabled. I have also created a materialized view log on the base table. Inserting into the base table takes a lot of time. Can you please tell me why?

Dear Rahul,
Here is my materialized view..........
create materialized view mv_test on prebuilt table refresh force on commit
enable query rewrite as
SELECT P.PID,
SUM(HH_REGD) AS HH_REGD,
SUM(INPRO_WORKS) AS INPRO_WORKS,
SUM(COMP_WORKS) AS COMP_WORKS,
SUM(SKILL_WAGE) AS SKILL_WAGE,
SUM(UN_SKILL_WAGE) AS UN_SKILL_WAGE,
SUM(WAGE_ADVANCE) AS WAGE_ADVANCE,
SUM(MAT_AMT) AS MAT_AMT,
SUM(DAYS) AS DAYS,
P.INYYYYMM,P.FIN_YEAR
FROM PROG_MONTHLY P
WHERE SUBSTR(PID,5,2)<>'PP'
GROUP BY PID,P.INYYYYMM,P.FIN_YEAR;
Please help me: does ENABLE QUERY REWRITE cause any performance degradation?
Thanks & Regards
Kris -
Materialized views on prebuilt tables - query rewrite
Hi Everyone,
I am currently planning to implement the query rewrite functionality via materialized views to leverage existing aggregated tables.
Goal: to use aggregate-awareness for our queries
How: by creating views on existing aggregates loaded via ETL (CREATE MATERIALIZED VIEW xxx ON PREBUILT TABLE ENABLE QUERY REWRITE)
Advantage: leverage Oracle functionality + render the logical model simpler (no aggregates)
Disadvantage: existing ETLs need to be written as SQL in the view creation statement --> the aggregation rule exists twice (once in the db, once in the ETL)
Issue: certain ETLs are quite complex, with lookups, functions, ... --> this might create overly complex SQL in the view creation statements
My question: is there a way around the issue described? (I'm assuming the SQL in the view creation is necessary for Oracle to know when an aggregate can be used.)
Best practices & shared experiences are welcome as well of course
Kind regards,
Peter

streefpo wrote:
I'm still in the process of testing, but the drops should not be necessary.
Remember: The materialized view is nothing but a definition - the table itself continues to exist as before.
So as long as the definition doesn't change (added column, changed calculation, ...), the materialized view doesn't need to be re-created. (as the data is not maintained by Oracle)

Thanks for reminding me, but if you find a documented approach I will be waiting, because this was the basis of my argument from the beginning.
SQL> select * from v$version ;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> desc employees
Name Null? Type
EMPLOYEE_ID NOT NULL NUMBER(6)
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
COMMISSION_PCT NUMBER(2,2)
MANAGER_ID NUMBER(6)
DEPARTMENT_ID NUMBER(4)
SQL> select count(*) from employees ;
COUNT(*)
107
SQL> create table mv_table nologging as select department_id, sum(salary) as totalsal from employees group by department_id ;
Table created.
SQL> desc mv_table
Name Null? Type
DEPARTMENT_ID NUMBER(4)
TOTALSAL NUMBER
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> create materialized view mv_table on prebuilt table with reduced precision enable query rewrite as select department_id, sum(salary) as totalsal from employees group by department_id ;
Materialized view created.
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> select object_name, object_type from user_objects where object_name = 'MV_TABLE' ;
OBJECT_NAME OBJECT_TYPE
MV_TABLE TABLE
MV_TABLE MATERIALIZED VIEW
SQL> insert into mv_table values (999, 100) ;
insert into mv_table values (999, 100)
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> update mv_table set totalsal = totalsal * 1.1 where department_id = 10 ;
update mv_table set totalsal = totalsal * 1.1 where department_id = 10
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> delete from mv_table where totalsal <= 10000 ;
delete from mv_table where totalsal <= 10000
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

While investigating for this thread I actually made my own question redundant, as the answer became gradually clear:
When using complex ETL's, I just need to make sure the complexity is located in the ETL loading the detailed table, not the aggregate
I'll try to clarify through an example:
- A detailed Table DET_SALES exists with Sales per Day, Store & Product
- An aggregated table AGG_SALES_MM exists with Sales, SalesStore per Month, Store & Product
- An ETL exists to load AGG_SALES_MM where Sales = SUM(Sales) & SalesStore = (SUM(Sales) Across Store)
--> i.e. the SalesStore measure will be derived out of a lookup
- A (Prebuilt) Materialized View will exist with the same column definitions as the ETL
--> to allow query-rewrite to know when to access the table
My concern was how to include the SalesStore in the materialized view definition (--> complex SQL!)
--> I should actually include SalesStore in the DET_SALES table, thus:
- including the 'Across Store' function in the detailed ETL
- rendering my Aggregation ETL into a simple GROUP BY
- rendering my materialized view definition into a simple GROUP BY as well

Not sure how close your example is to your actual problem. I also don't know whether you are doing an incremental or complete data load, or what the data volume is.
But the "SalesStore = (SUM(Sales) Across Store)" measure can be derived from the aggregated MV using an analytic function. One can just create a normal view on top of the MV for querying. It is hard to believe that aggregating in the detail table during the ETL load is the best approach, but what do I know? -
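A sketch of that suggestion, with hypothetical names: the across-store measure is computed by an analytic SUM in a plain view over the aggregate table, so neither the ETL nor the MV definition ever has to materialize it:

```sql
CREATE VIEW v_sales_mm AS
SELECT month_id,
       store_id,
       product_id,
       sales,
       -- total across all stores for the same month and product
       SUM(sales) OVER (PARTITION BY month_id, product_id) AS sales_store
FROM   agg_sales_mm;
```
-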
Will Materialized view log reduces the performance of DML statements on the master table
Hi all,
I need to refresh an on-demand fast-refresh materialized view in Oracle 11gR2. For this purpose I created a materialized view log on the (non-partitioned) table, into which records are inserted at a rate of 5000/day, as follows.
CREATE MATERIALIZED VIEW LOG ON NOTES NOLOGGING WITH PRIMARY KEY INCLUDING NEW VALUES;
This table already has 20 lakh (2 million) records; will adding this mview log reduce DML performance on the table?
Please guide me on this.

Having the base table maintain a materialised view log will have an impact on the speed of DML statements: they are doing extra work, which will take extra time. A more sensible question would be to ask whether it will have a significant impact, to which the answer is almost certainly "no".
5000 records inserted a day is nothing. Adding a view log to the heap really shouldn't cause any trouble at all - but ultimately only your own testing can establish that. -
Table Operator Vs Materialized View Operator
Hi All,
Could you please explain the differences between the Table operator and the Materialized View operator in Oracle Warehouse Builder 11g?
Regards,
Subbu

Below is an extract of my notes on materialized views. The complete notes are here:
http://gerardnico.com/wiki/dw/aggregate_table
=====Notes=====
Materialized views are the equivalent of a summary table. (Materialized views can also be used as replicas.)
In an OLAP approach, each of the elements of a dimension can be summarized using a hierarchy.
The end user queries the tables and views in the database. The query rewrite mechanism in the database automatically rewrites the SQL query to use these summary tables.
This mechanism reduces response time for returning results from the query. Materialized views within the data warehouse are transparent to the end user or to the database application.
This is relatively straightforward and is answered in a single word - performance. By calculating the answers to the really hard questions up front (and once only), we greatly reduce the load on the machine. We will experience:
* Less physical reads - There is less data to scan through.
* Less writes - We will not be sorting/aggregating as frequently.
* Decreased CPU consumption - We will not be calculating aggregates and functions on the data, as we will have already done that.
* Markedly faster response times - Our queries will return incredibly quickly when a summary is used, as opposed to the details. This will be a function of the amount of work we can avoid by using the materialized view, but many orders of magnitude is not out of the question.
Materialized views will increase your need for one resource - more permanently allocated disk. We need extra storage space to accommodate the materialized views, of course, but for the price of a little extra disk space, we can reap a lot of benefit.
Also notice that we may have created a materialized view, but when we ANALYZE, we are analyzing a table. A materialized view creates a real table, and this table may be indexed, analyzed, and so on.
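A minimal illustration of the rewrite mechanism described above, with hypothetical table and column names (QUERY_REWRITE_ENABLED must be TRUE for the session):

```sql
CREATE MATERIALIZED VIEW mv_sales_by_prod
  ENABLE QUERY REWRITE AS
  SELECT prod_id, SUM(amount) AS total_amt
  FROM   sales
  GROUP  BY prod_id;

-- The user keeps querying the detail table; the optimizer may
-- transparently rewrite this against MV_SALES_BY_PROD instead:
SELECT prod_id, SUM(amount)
FROM   sales
GROUP  BY prod_id;
```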
Success
Nico