
Total found: 20. Displayed: 20.
Publication date: 26-04-2011

Parameter-sensitive plans for structural scenarios

Number: US0007933894B2

Systems and methods that generate specialized plans for compiling SQL queries. A plan generator component scans the query representation for parameter sensitive predicates and evaluates each predicate individually based on the parameter values. Accordingly, queries can be identified not only based on their structures, but also based on their parameter conditions. The specialized plans are more efficient for particular values, wherein queries that employ such values are optimally executed.
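
As an illustration of the idea described in this abstract, here is a minimal Python sketch (names such as PlanCache and selectivity_bucket are hypothetical, not from the patent): a plan is specialized per selectivity bucket of a parameter-sensitive predicate, rather than reused for all parameter values of the same query structure.

```python
# A minimal sketch: cache one specialized plan per selectivity bucket of a
# parameter-sensitive predicate instead of one plan per query structure.

from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    strategy: str  # e.g. "index_seek" for selective values, "full_scan" otherwise


def selectivity_bucket(param_value, histogram):
    """Classify a parameter value by the fraction of rows it is expected to match."""
    fraction = histogram.get(param_value, 0.01)
    return "low" if fraction < 0.05 else "high"


class PlanCache:
    def __init__(self):
        self._plans = {}  # (query_shape, bucket) -> Plan

    def get_plan(self, query_shape, param_value, histogram):
        bucket = selectivity_bucket(param_value, histogram)
        key = (query_shape, bucket)
        if key not in self._plans:
            # Compile a plan specialized for this bucket of parameter values.
            strategy = "index_seek" if bucket == "low" else "full_scan"
            self._plans[key] = Plan(strategy)
        return self._plans[key]


# The same query shape gets different plans for selective vs. common values.
cache = PlanCache()
hist = {1: 0.001, 42: 0.60}
print(cache.get_plan("WHERE status = ?", 1, hist))   # Plan(strategy='index_seek')
print(cache.get_plan("WHERE status = ?", 42, hist))  # Plan(strategy='full_scan')
```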

Publication date: 18-02-2003

Internet database system

Number: US0006523036B1

An incrementally-scalable database system and method. The system architecture enables database servers to be scaled by adding resources, such as additional servers, without requiring that the system be taken offline. Such scaling includes both adding one or more computer servers to a given server cluster, which enables an increase in database read transaction throughput, and adding one or more server clusters to the system configuration, which provides for increased read and write transaction throughput. The system also provides for load balancing read transactions across each server cluster, and load balancing write transactions across a plurality of server clusters. The system architecture includes an application server layer including one or more computers on which an application program(s) is running, a database server layer comprising two or more server clusters that each include two or more computer servers with replicated data, and an intermediate "virtual transaction" layer that ...
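
The architecture sketched in this abstract can be illustrated with a small Python example (Cluster and Router are assumed names, not the patented system): reads are balanced across the replicas inside a cluster, while writes are spread across clusters, so adding servers or clusters raises throughput.

```python
# A minimal sketch: round-robin reads within a replicated cluster, writes
# distributed across clusters by key.

import itertools


class Cluster:
    def __init__(self, name, servers):
        self.name = name
        self.servers = servers            # replicated servers within the cluster
        self._read_cycle = itertools.cycle(servers)

    def next_read_server(self):
        # Any replica can serve a read; rotate for load balancing.
        return next(self._read_cycle)

    def primary(self):
        # Writes within a cluster go to its current primary (simplified).
        return self.servers[0]


class Router:
    def __init__(self, clusters):
        self.clusters = clusters

    def _owning_cluster(self, key):
        # Write load is balanced by spreading keys across clusters.
        return self.clusters[hash(key) % len(self.clusters)]

    def route_read(self, key):
        return self._owning_cluster(key).next_read_server()

    def route_write(self, key):
        return self._owning_cluster(key).primary()


router = Router([Cluster("c1", ["c1-a", "c1-b"]), Cluster("c2", ["c2-a", "c2-b"])])
print(router.route_read("user:7"), router.route_write("user:7"))
```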

Publication date: 03-05-2012

PARTITIONING ONLINE DATABASES

Number: US20120109892A1
Assignee: Microsoft Corporation

The present invention extends to methods, systems, and computer program products for partitioning online databases. Online database operations, such as, for example, SPLIT, MERGE, and DROP, are used to alter the arrangement of partitions in a federated database. A SPLIT operation splits rows at one partition across a plurality of other partitions. A MERGE operation merges rows at a plurality of partitions into one partition. A DROP operation shifts responsibility for rows of data from one partition to another partition and then drops the rows from the one partition.
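
The three operations named in the abstract can be illustrated with a toy Python sketch (Federation and its methods are hypothetical names, not the patented implementation): SPLIT distributes a partition's rows across targets, MERGE collects several partitions into one, and DROP moves responsibility for rows to another partition before removing them from the source.

```python
# A minimal sketch of SPLIT, MERGE and DROP over a dictionary-based "federation".

class Federation:
    def __init__(self):
        self.partitions = {}  # partition name -> list of row dicts

    def split(self, source, targets):
        """SPLIT: distribute rows of `source` across several target partitions.

        `targets` maps target partition name -> predicate selecting its rows."""
        rows = self.partitions.pop(source, [])
        for name in targets:
            self.partitions.setdefault(name, [])
        for row in rows:
            for name, predicate in targets.items():
                if predicate(row):
                    self.partitions[name].append(row)
                    break

    def merge(self, sources, target):
        """MERGE: collect rows of several partitions into one partition."""
        merged = self.partitions.setdefault(target, [])
        for name in sources:
            merged.extend(self.partitions.pop(name, []))

    def drop(self, source, target):
        """DROP: shift responsibility for rows to `target`, then drop them at `source`."""
        self.partitions.setdefault(target, []).extend(self.partitions.pop(source, []))


fed = Federation()
fed.partitions["p1"] = [{"id": 1}, {"id": 2}, {"id": 3}]
fed.split("p1", {"p2": lambda r: r["id"] < 3, "p3": lambda r: True})
fed.merge(["p2", "p3"], "p4")
print(fed.partitions)  # all rows now live in partition p4
```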

Publication date: 04-04-2006

Virtual file system

Number: US0007024427B2
Assignee: EMC Corporation

A virtual file system and method. The system architecture enables a plurality of underlying file systems running on various file servers to be "virtualized" into one or more "virtual volumes" that appear as a local file system to clients that access the virtual volumes. The system also enables the storage spaces of the underlying file systems to be aggregated into a single virtual storage space, which can be dynamically scaled by adding or removing file servers without taking any of the file systems offline and in a manner transparent to the clients. This functionality is enabled through a software "virtualization" filter on the client that intercepts file system requests and a virtual file system driver on each file server. The system also provides for load balancing file accesses by distributing files across the various file servers in the system, through migration of data files between servers.
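
The "virtual volume" idea can be sketched in a few lines of Python (VirtualVolume and its hash-based placement are illustrative assumptions, not the patented mechanism): clients see a single namespace while a resolver maps each virtual path to one of the underlying file servers.

```python
# A minimal sketch: one virtual namespace spread over several file servers.

import hashlib


class VirtualVolume:
    def __init__(self, servers):
        self.servers = servers  # underlying file servers aggregated into the volume

    def locate(self, virtual_path):
        """Map a path in the virtual namespace to one physical server location."""
        digest = hashlib.sha1(virtual_path.encode()).hexdigest()
        server = self.servers[int(digest, 16) % len(self.servers)]
        return f"{server}:/data/{virtual_path.lstrip('/')}"

    def add_server(self, server):
        # Scaling the virtual storage space: servers can be added without changing
        # the virtual paths seen by clients (file migration not shown here).
        self.servers.append(server)


vol = VirtualVolume(["fs1", "fs2"])
print(vol.locate("/projects/report.txt"))
```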

Publication date: 06-01-2009

Scalable network file system

Number: US0007475199B1
Assignee: EMC Corporation

An incrementally-scalable file system and method. The system architecture enables file systems to be scaled by adding resources, such as additional filers and/or file servers, without requiring that the system be taken offline or that the change be visible to client applications. The system also provides for load balancing file accesses by distributing files across the various file storage resources in the system, as dictated by the relative capacities of said storage resources. The system provides one or more "virtual" file system volumes in a manner that makes it appear to client applications that all of the file system's storage space resides on the virtual volume(s), while in reality the files may be stored on many more physical volumes on the filers and/or file servers in the system.

Publication date: 03-05-2012

SCOPED DATABASE CONNECTIONS

Number: US20120109926A1
Assignee: Microsoft Corporation

The present invention extends to methods, systems, and computer program products for scoping the context used to access a database partition. Embodiments of the invention enable data isolation using partitions in multi-tenant databases, while relieving client applications from dealing with the partitions. For example, a computer system that includes a distributed database system comprising a plurality of database partitions in a federation receives a context to use when performing database access operations within the distributed database system. The context identifies a specified relevant portion of the federation. The computer system also receives a database access operation that is associated with the context. The computer system modifies the semantics of the database access operation in accordance with the associated context, to direct application of the database access operation to the specified relevant portion of the federation.
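
A minimal Python sketch of the scoping idea (ScopedConnection and use_member are hypothetical names, not the patented API): the application sets a context once, and later operations are automatically directed at the corresponding federation member instead of naming partitions explicitly.

```python
# A minimal sketch: a connection context that scopes later operations to one
# federation member, giving per-tenant data isolation.

class ScopedConnection:
    def __init__(self, federation):
        self._federation = federation  # member key -> list of rows
        self._scope = None

    def use_member(self, member_key):
        """Set the context that later database operations will be scoped to."""
        self._scope = member_key

    def select_all(self):
        # The operation's semantics are modified by the context: only rows of
        # the scoped member are visible.
        if self._scope is None:
            raise RuntimeError("no federation member selected")
        return list(self._federation[self._scope])


federation = {"tenant_a": [{"id": 1}], "tenant_b": [{"id": 2}]}
conn = ScopedConnection(federation)
conn.use_member("tenant_a")
print(conn.select_all())  # only tenant_a rows are returned
```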

Publication date: 21-06-2016

Partitioning online databases

Number: US0009372882B2

Methods, systems, and computer program products are provided for partitioning online databases. Online database operations, such as, for example, SPLIT, MERGE, and DROP, are used to alter the arrangement of partitions in a federated database. A SPLIT operation splits rows at one partition across a plurality of other partitions. A MERGE operation merges rows at a plurality of partitions into one partition. A DROP operation shifts responsibility for rows of data from one partition to another partition and then drops the rows from the one partition.

Publication date: 06-10-2016

PARTITIONING ONLINE DATABASES

Number: US20160292215A1

Methods, systems, and computer program products are provided for partitioning online databases. Online database operations, such as, for example, SPLIT, MERGE, and DROP, are used to alter the arrangement of partitions in a federated database. A SPLIT operation splits rows at one partition across a plurality of other partitions. A MERGE operation merges rows at a plurality of partitions into one partition. A DROP operation shifts responsibility for rows of data from one partition to another partition and then drops the rows from the one partition.

1. A computing system, comprising: one or more processors; and one or more computer-readable media having stored thereon computer-executable instructions that are executable by the one or more processors to cause the computing system to drop rows in a distributed database, the computer-executable instructions including instructions that are executable to cause the computing system to perform at least the following: receive a drop directive including a set of key values that each identify a different row of the distributed database that is to be dropped, the set of key values identifying: (i) one or more first rows selected from among a first subset of rows that are stored in a first partition of the distributed database, and (ii) one or more second rows selected from among a second subset of rows that are stored in a second partition of the distributed database; and execute a drop operation to drop the one or more first rows and the one or more second rows, while the first and second partitions remain online, including: creating a third partition of the distributed database; expanding the one or more first rows and the one or more second rows into the third partition; and dropping the one or more first rows from the first partition and dropping the one or more second rows from the second partition.

2. The computing system of claim 1, the computer-executable instructions also including instructions that are ...

Publication date: 30-07-2019

Optimizing pipelining result sets with fault tolerance in distributed query execution

Number: US0010366084B2

Aspects extend to methods, systems, and computer program products for optimally pipelining result sets with fault tolerance in distributed query execution. Distributed computing jobs are optimized by dividing the distributed computing jobs into one or more bubbles for execution. Each bubble can be independently executed, potentially in parallel with other bubbles, when resources to handle the bubble are available. Intra-bubble communication can be streamed between vertices within a bubble. Inter-bubble communication can be stored to durable storage. Bubbles provide a failure boundary for a job graph and re-executing a bubble along with storage of intermediate results in durable storage can be used to recover from failures. When a vertex inside a bubble fails, computation can resume by rescheduling the execution of the failed bubble from the durable inputs for that bubble. Durable storage provides a light-weight failover to handle non-deterministic behavior. Jobs can also leverage streaming ...
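
The bubble mechanism described above can be illustrated with a toy Python sketch (the vertex and bubble functions and the in-memory "durable storage" are assumptions for illustration, not the patented system): vertices inside a bubble pass data directly, bubble outputs are persisted at the boundary, and a failed bubble is re-run from its durable inputs.

```python
# A minimal sketch: stream inside a bubble, persist at bubble boundaries, and
# recover from a vertex failure by re-executing the whole bubble.

durable_storage = {}  # inter-bubble results persisted at bubble boundaries


def vertex(values):
    # Stand-in for a single vertex computation.
    return [v * 2 for v in values]


def run_bubble(name, inputs, attempts=2):
    data = [v for key in inputs for v in durable_storage[key]]
    for _ in range(attempts):
        try:
            result = vertex(vertex(data))   # two pipelined vertices in one bubble
            durable_storage[name] = result  # persist only at the bubble boundary
            return
        except Exception:
            continue  # failure boundary: re-execute the bubble from durable inputs
    raise RuntimeError(f"bubble {name} failed after {attempts} attempts")


durable_storage["job_input"] = [1, 2, 3]
run_bubble("bubble_1", ["job_input"])
run_bubble("bubble_2", ["bubble_1"])
print(durable_storage["bubble_2"])  # [16, 32, 48]: four doubling vertices in total
```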

Publication date: 20-11-2014

PARTITIONING ONLINE DATABASES

Number: US20140344221A1

Methods, systems, and computer program products are provided for partitioning online databases. Online database operations, such as, for example, SPLIT, MERGE, and DROP, are used to alter the arrangement of partitions in a federated database. A SPLIT operation splits rows at one partition across a plurality of other partitions. A MERGE operation merges rows at a plurality of partitions into one partition. A DROP operation shifts responsibility for rows of data from one partition to another partition and then drops the rows from the one partition.

1. At a distributed database system including one or more processors and system memory, the distributed database system also including a plurality of database partitions, including a first database partition and a second database partition, in a federation, the federation configured to store a plurality of rows of data, each row of data identified by a federation key value such that the federation stores data for a set of federation key values, each of the plurality of database partitions configured to store any rows of data having federation key values within a specified subset of the set of federation key values, a method for dropping rows of data from the distributed database system, the method comprising: an act of receiving a partition drop directive indicating how to process at least some of one or more of the plurality of specified subsets of federation key values to drop corresponding rows of data stored in the plurality of database partitions; and an act of executing a drop operation to drop the corresponding rows of data in accordance with the partition drop directive and while the plurality of database partitions remain online, including, for each database partition that is to drop corresponding rows of data, configuring one or more other database partitions to store portions of the rows of data to be dropped.

2. The method of claim 1, wherein configuring another database to store portions of the rows of ...

Publication date: 14-07-2015

Scoped database connections

Number: US0009081837B2

The present invention extends to methods, systems, and computer program products for scoping the context used to access a database partition. Embodiments of the invention enable data isolation using partitions in multi-tenant databases, while relieving client applications from dealing with the partitions. For example, a computer system that includes a distributed database system comprising a plurality of database partitions in a federation receives a context to use when performing database access operations within the distributed database system. The context identifies a specified relevant portion of the federation. The computer system also receives a database access operation that is associated with the context. The computer system modifies the semantics of the database access operation in accordance with the associated context, to direct application of the database access operation to the specified relevant portion of the federation.

Publication date: 15-03-2018

Optimizing pipelining result sets with fault tolerance in distributed query execution

Number: US20180075098A1
Assignee: Microsoft Technology Licensing LLC

Aspects extend to methods, systems, and computer program products for optimally pipelining result sets with fault tolerance in distributed query execution. Distributed computing jobs are optimized by dividing the distributed computing jobs into one or more bubbles for execution. Each bubble can be independently executed, potentially in parallel with other bubbles, when resources to handle the bubble are available. Intra-bubble communication can be streamed between vertices within a bubble. Inter-bubble communication can be stored to durable storage. Bubbles provide a failure boundary for a job graph and re-executing a bubble along with storage of intermediate results in durable storage can be used to recover from failures. When a vertex inside a bubble fails, computation can resume by rescheduling the execution of the failed bubble from the durable inputs for that bubble. Durable storage provides a light-weight failover to handle non-deterministic behavior. Jobs can also leverage streaming to increase performance.

Publication date: 26-08-2021

System and method for machine learning for system deployments without performance regressions

Number: US20210263932A1
Assignee: Microsoft Technology Licensing LLC

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance differences, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.
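
The selection logic described in this abstract can be illustrated with a short Python sketch (pick_models_to_deploy and the cost/runtime inputs are hypothetical, not the patented pipeline): plans with and without a learned model are compared, a limited pre-production budget is spent on the queries whose plans differ most, and a model is kept only if it wins consistently in those experiments.

```python
# A minimal sketch: spend a constrained pre-production budget on the queries
# with the largest plan-cost differences, then deploy only consistent winners.

def pick_models_to_deploy(candidates, baseline_cost, learned_cost, run_experiment,
                          budget=3, required_wins=3):
    deployed = []
    for model in candidates:
        # Rank queries by how much the learned model changes the estimated cost.
        ranked = sorted(baseline_cost,
                        key=lambda q: abs(baseline_cost[q] - learned_cost[model][q]),
                        reverse=True)
        trial_queries = ranked[:budget]  # constrained set of pre-production experiments
        wins = sum(1 for q in trial_queries
                   if run_experiment(model, q) <= run_experiment(None, q))
        if wins >= required_wins:
            deployed.append(model)  # expected to improve performance consistently
    return deployed


# Toy usage with synthetic costs and a fake experiment runner.
base = {"q1": 10.0, "q2": 8.0, "q3": 5.0, "q4": 1.0}
learned = {"m1": {"q1": 4.0, "q2": 7.0, "q3": 5.0, "q4": 1.0}}
runtimes = {("m1", "q1"): 3.9, (None, "q1"): 10.2,
            ("m1", "q2"): 6.8, (None, "q2"): 8.1,
            ("m1", "q3"): 5.0, (None, "q3"): 5.0}
print(pick_models_to_deploy(["m1"], base, learned,
                            lambda m, q: runtimes[(m, q)]))  # ['m1']
```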

Publication date: 08-06-2023

Query Optimizer Advisor

Number: US20230177053A1
Assignee: Microsoft Technology Licensing LLC

Methods for optimization in query plans are performed by computing systems via a query optimizer advisor. A query optimizer advisor (QO-Advisor) is configured to steer a query plan optimizer towards more efficient plan choices by providing rule hints to improve navigation of the search space for each query in formulation of its query plan. The QO-Advisor receives historical information of a distributed data processing system as an input, and then generates a set of rule hint pairs based on the historical information. The QO-Advisor provides the set of rule hint pairs to a query plan optimizer, which then optimizes a query plan of an incoming query through application of a rule hint pair in the set. This application is based at least on a characteristic of the incoming query matching a portion of the rule hint pair.
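
A minimal Python sketch of the rule-hint flow (build_rule_hints, optimize and the string-matching of query characteristics are illustrative assumptions, not the QO-Advisor implementation): hint pairs are derived from historical outcomes, and a hint is applied when an incoming query matches its characteristic.

```python
# A minimal sketch: derive (query characteristic -> rule hint) pairs from
# history and apply a hint when an incoming query matches the characteristic.

def build_rule_hints(history):
    """history: iterable of (characteristic, rule, was_beneficial) tuples."""
    hints = {}
    for characteristic, rule, beneficial in history:
        if beneficial:
            hints[characteristic] = rule
    return hints


def optimize(query, rule_hints, default_rules):
    rules = list(default_rules)
    for characteristic, rule in rule_hints.items():
        if characteristic in query:   # the query matches a portion of the hint pair
            rules.append(rule)        # steer the plan search toward this rule
    return f"plan({query!r}, rules={rules})"


hints = build_rule_hints([("big_join", "prefer_hash_join", True),
                          ("small_table", "broadcast_join", False)])
print(optimize("SELECT ... big_join ...", hints, ["pushdown_filters"]))
```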

Publication date: 15-06-2023

Query optimizer advisor

Number: WO2023107175A1
Assignee: Microsoft Technology Licensing, LLC.

Methods for optimization in query plans are performed by computing systems via a query optimizer advisor. A query optimizer advisor (QO-Advisor) is configured to steer a query plan optimizer towards more efficient plan choices by providing rule hints to improve navigation of the search space for each query in formulation of its query plan. The QO-Advisor receives historical information of a distributed data processing system as an input, and then generates a set of rule hint pairs based on the historical information. The QO-Advisor provides the set of rule hint pairs to a query plan optimizer, which then optimizes a query plan of an incoming query through application of a rule hint pair in the set. This application is based at least on a characteristic of the incoming query matching a portion of the rule hint pair.

Publication date: 26-10-2023

System and method for machine learning for system deployments without performance regressions

Number: US20230342359A1
Assignee: Microsoft Technology Licensing LLC

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance differences, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.

Publication date: 28-12-2023

Query set optimization in a data analytics pipeline

Number: WO2023249774A1
Assignee: Microsoft Technology Licensing, LLC

In a set of data analytics queries, at least one of the queries comprises more than one operator, and each query is at least one of: i) a producer of data for another query in the set, and ii) a consumer of data from another query in the set. In such examples, one or more computing devices identify each producer/consumer relationship between the queries. The one or more computing devices identify one or more optimizations among the queries based on the identified relationships. The one or more computing devices then apply at least one identified optimization to at least one of the queries.
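
A small Python sketch of the producer/consumer analysis (the query representation and the single-consumer fusion heuristic are assumptions for illustration, not the patented optimizations): queries are linked by which outputs they read, and an edge whose producer has exactly one consumer is flagged as a candidate for merging the two queries.

```python
# A minimal sketch: build producer/consumer edges between queries and flag
# single-consumer producers as fusion candidates.

def producer_consumer_edges(queries):
    """queries: dict name -> {"reads": set of input names, "writes": output name}."""
    edges = []
    for producer, pspec in queries.items():
        for consumer, cspec in queries.items():
            if producer != consumer and pspec["writes"] in cspec["reads"]:
                edges.append((producer, consumer))
    return edges


def single_consumer_candidates(queries):
    edges = producer_consumer_edges(queries)
    candidates = []
    for producer in queries:
        consumers = [c for p, c in edges if p == producer]
        if len(consumers) == 1:
            candidates.append((producer, consumers[0]))  # candidate pair to fuse
    return candidates


qs = {
    "q_clean":  {"reads": {"raw"},       "writes": "clean"},
    "q_agg":    {"reads": {"clean"},     "writes": "daily_agg"},
    "q_report": {"reads": {"daily_agg"}, "writes": "report"},
}
print(single_consumer_candidates(qs))  # [('q_clean', 'q_agg'), ('q_agg', 'q_report')]
```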

Publication date: 28-12-2022

System and method for machine learning for system deployments without performance regressions

Number: EP4107631A1
Assignee: Microsoft Technology Licensing LLC

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance differences, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.

Publication date: 05-09-2023

System and method for machine learning for system deployments without performance regressions

Number: US11748350B2
Assignee: Microsoft Technology Licensing LLC

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance differences, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.

Publication date: 17-09-2024

System and method for machine learning for system deployments without performance regressions

Number: US12093255B2
Assignee: Microsoft Technology Licensing LLC

Methods of machine learning for system deployments without performance regressions are performed by systems and devices. A performance safeguard system is used to design pre-production experiments for determining the production readiness of learned models based on a pre-production budget by leveraging big data processing infrastructure and deploying a large set of learned or optimized models for its query optimizer. A pipeline for learning and training differentiates the impact of query plans with and without the learned or optimized models, selects plan differences that are likely to lead to the most dramatic performance differences, runs a constrained set of pre-production experiments to empirically observe the runtime performance, and finally picks the models that are expected to lead to consistently improved performance for deployment. The performance safeguard system enables safe deployment not just for learned or optimized models but also for other ML-for-Systems features.
