This repository was archived by the owner on Sep 18, 2024. It is now read-only.
* docs: refine integrate with jina
* docs: tweak words
* docs: refine structure of jina integration
* docs: create 3 tabs
* docs: add volume mount
* docs: upgrade version
* docs: add embed with docarray
* docs: refine comments add changelog
* chore: bump docarray to 0.13.31
* docs: use mnt add output
* docs: tweak words and refer docarray
* docs: print run artifact id to output
* chore: add changelog
* docs: more text descriptions on artifact and zip
* docs: restructure usage
* docs: restructure usage
* docs: fix output shape
* docs: fix integration docarray host
* docs: tweak words
* docs: remove label for clip training
docs/walkthrough/integrate-with-jina.md (99 additions, 28 deletions)
@@ -1,5 +1,7 @@
+# Integration
+
 (integrate-with-jina)=
-#Integrate with Jina
+## Fine-tuned model as Executor

 Once fine-tuning is finished, it's time to actually use the model.
 Finetuner, being part of the Jina ecosystem, provides a convenient way to use tuned models via [Jina Executors](https://docs.jina.ai/fundamentals/executor/).
@@ -8,14 +10,35 @@ We've created the [`FinetunerExecutor`](https://hub.jina.ai/executor/13dzxycc) w
 More specifically, the executor exposes an `/encode` endpoint that embeds [Documents](https://docarray.jina.ai/fundamentals/document/) using the fine-tuned model.

 Loading a tuned model is simple! You just need to provide a few parameters under the `uses_with` argument when adding the `FinetunerExecutor` to the [Flow](https://docs.jina.ai/fundamentals/flow/).
+You have three options:
+
+````{tab} Artifact id and token
+```python
+import finetuner
+from jina import Flow
+
+finetuner.login()

-````{tab} Python
+token = finetuner.get_token()
+run = finetuner.get_run(
+    experiment_name='YOUR-EXPERIMENT',
+    run_name='YOUR-RUN'
+)
+
+f = Flow().add(
+    uses='jinahub+docker://FinetunerExecutor/v0.9.2',  # use v0.9.2-gpu for gpu executor.
[…]
+    uses='jinahub+docker://FinetunerExecutor/v0.9.2',  # use v0.9.2-gpu for gpu executor.
+    uses_with={'artifact': '/mnt/YOUR-MODEL.zip'},
+    volumes=['/your/local/path/:/mnt']  # mount your model path to docker.
 )
 ```
 ````
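The mounted-zip option above needs the tuned artifact to exist on the host first. A hedged sketch of pulling it down with the Finetuner client before starting the Flow (the directory argument to `save_artifact` and the exact on-disk layout are assumptions, not lines from this diff):

```python
import finetuner

finetuner.login()

run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN',
)

# Assumed to write the zipped model under ./artifacts/; that local path is
# what gets mounted into the container via volumes=['/your/local/path/:/mnt'].
run.save_artifact('artifacts')
```

After that, the `uses_with={'artifact': '/mnt/YOUR-MODEL.zip'}` setting from the hunk above points at the zip inside the container.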
@@ -26,19 +49,48 @@ with:
   port: 51000
   protocol: grpc
 executors:
-  uses: jinahub+docker://FinetunerExecutor
+  uses: jinahub+docker://FinetunerExecutor/v0.9.2
   with:
-    artifact: 'model_dir/tuned_model'
-    batch_size: 16
+    artifact: 'COPY-YOUR-ARTIFACT-ID-HERE'
+    token: 'COPY-YOUR-TOKEN-HERE' # or better set as env
 ```
 ````
-```{admonition} FinetunerExecutor via source code
-:class: tip
-You can also use the `FinetunerExecutor` via source code by specifying `jinahub://FinetunerExecutor` under the `uses` parameter.
-However, using docker images is recommended.
+
+As you can see, it's super easy!
+If you did not call `save_artifact`,
+you need to provide the `artifact_id` and `token`.
+`FinetunerExecutor` will automatically pull your model from the cloud storage to the container.
+
+On the other hand,
+if you have saved the artifact locally,
+please mount the zipped artifact to the docker container.
+`FinetunerExecutor` will unzip the artifact and load the models.
+
+You can start your flow with:
+
+```python
+with f:
+    # In this example, we fine-tuned a BERT model and embed a Document.
+    returned_docs = f.post(
+        on='/encode',
+        inputs=DocumentArray(
+            [
+                Document(
+                    text='some text to encode'
+                )
+            ]
+        )
+    )
+
+    for doc in returned_docs:
+        print(f'Text of the returned document: {doc.text}')
+        print(f'Shape of the embedding: {doc.embedding.shape}')
 ```

-As you can see, it's super easy! We just provided the model path and the batch size.
+```console
+Text of the returned document: some text to encode
+Shape of the embedding: (768,)
+```

 In order to see what other options you can specify when initializing the executor, please go to the [`FinetunerExecutor`](https://hub.jina.ai/executor/13dzxycc) page and click on `Arguments` on the top-right side.

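The YAML tab in the hunk above can also be served without writing any Flow construction code. A minimal sketch, assuming the complete configuration (including the top-level sections not visible in this hunk) is saved locally as `flow.yml`:

```python
from docarray import Document, DocumentArray  # import path for docarray 0.13.x
from jina import Flow

f = Flow.load_config('flow.yml')  # build the Flow from the YAML configuration

with f:
    docs = f.post(
        on='/encode',
        inputs=DocumentArray([Document(text='some text to encode')]),
    )
    # Shape should match the console output above, e.g. (768,) for the BERT example.
    print(docs[0].embedding.shape)
```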
@@ -47,28 +99,47 @@ In order to see what other options you can specify when initializing the executor
 The only required argument is `artifact`. We provide default values for others.
 ```

+(integrate-with-docarray)=
+## Embed DocumentArray

-## Using `FinetunerExecutor`
-
-Here's a simple code snippet demonstrating the `FinetunerExecutor` usage in the Flow:
+Similarly, you can embed a [DocumentArray](https://docarray.jina.ai/) with the fine-tuned model:
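A hedged sketch only — the `get_model` and `encode` calls and their parameters are assumptions about the Finetuner client, not lines from this diff — of what embedding a DocumentArray directly could look like:

```python
import finetuner
from docarray import Document, DocumentArray

finetuner.login()

run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN',
)

da = DocumentArray([Document(text='some text to encode')])

# Both calls below are assumed API: load the tuned model by artifact id,
# then write embeddings onto the Documents in place.
model = finetuner.get_model(artifact=run.artifact_id, token=finetuner.get_token())
finetuner.encode(model=model, data=da)

print(da[0].embedding.shape)  # e.g. (768,) for the BERT example above
```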